Is there a high-level way to read the lines of an output file and recover the types encoded in their content?

Suppose I have an output file that I want to read, where each line was created by concatenating values of several types and wrapping them in list brackets:

[('tupleValueA','tupleValueB'), 'someString', ('anotherTupleA','anotherTupleB')]


I want to read these lines back. I can read each line as a string and parse out the values and types myself, but I was wondering whether Python has a higher-level way to do this.

I wrote a function that does this by hand, but I could not find a higher-level approach.

0




3 answers


What you are looking for is eval. But keep in mind that this function evaluates and executes whatever code it is given, so never run it on untrusted input!

>>> print(eval("[('tupleValueA', 1), 'someString']"))
[('tupleValueA', 1), 'someString']
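If the lines contain only Python literals (strings, numbers, tuples, lists, dicts, booleans, None), the standard library's ast.literal_eval gives the same result without the security risk, since it refuses to execute arbitrary expressions. A minimal sketch:

```python
import ast

# A line from the output file: the repr of a list mixing tuples and strings.
line = "[('tupleValueA', 'tupleValueB'), 'someString', ('anotherTupleA', 'anotherTupleB')]"

# literal_eval parses Python literal syntax only; eval-style code
# such as "__import__('os').system(...)" raises ValueError instead.
data = ast.literal_eval(line)

print(data[1])        # 'someString'
print(type(data[0]))  # <class 'tuple'>
```

This keeps the convenience of eval for files written with repr/str while rejecting anything that is not a plain literal.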




If you have control over the script that generates the output file, then I would suggest using JSON encoding. The JSON format is very similar to the Python string representation of lists and dictionaries, and it will be much safer and more reliable to read. Note that tuples are encoded as JSON arrays, so they come back as lists:

>>> import json
>>> json.dumps(['foo', {'bar': ('baz', None, 1.0, 2)}])
'["foo", {"bar": ["baz", null, 1.0, 2]}]'
>>> json.loads('["foo", {"bar": ["baz", null, 1.0, 2]}]')
['foo', {'bar': ['baz', None, 1.0, 2]}]


+2




The problem you are describing is conventionally called serialization. JavaScript Object Notation (JSON) is one of the most popular serialization formats.



0




It might be better to save the data with a module like pickle instead of using plain strings. That way you avoid many of the problems that come with eval and get a more reliable solution.

0








