The Python package dataset describes itself as "databases for lazy people", and the description is accurate.
To save data with dataset, all you need is a Python dictionary: the keys of the dictionary become the columns of a table, and that is all.
import dataset

db = dataset.connect('sqlite:///:memory:')
table = db['sometable']
table.insert(dict(name='John Doe', age=37))
table.insert(dict(name='Jane Doe', age=34, gender='female'))
john = table.find_one(name='John Doe')
Dataset automatically creates any tables and columns that are needed.
Internally, data is stored in an SQLite, PostgreSQL or MySQL database; my experience so far has been with SQLite only.
In one project I use it purely as an in-memory database: after scraping data from a website, the results are stored in an in-memory SQLite database. Then I can use the standard dataset API to retrieve rows matching certain criteria and sort them before emailing the results.
On another project, I use it to store data in SQLite and retrieve it later.
I must admit that for anything beyond basic searching, filtering and sorting, you have to write SQL queries yourself.
One useful feature is upsert, a smart combination of insert and update. If a row with matching keys exists it is updated; otherwise a new row is inserted into the table.
There is also a feature to export data to CSV or JSON.
If you think that using a database on your next project is overkill, but you do need to filter, search or sort data, take a look at dataset.
It beats rolling a custom solution. I know, because before I learned about dataset I stored data in pickle format and wrote custom functions for filtering, sorting and retrieving it.