In my experience, using dask is overkill if the original dataset fits in memory and does not need parallel processing.
Dask can be faster in that setting because it evaluates lazily: operations only build a task graph, and the data is read and processed partition by partition when you call compute(). That makes it a good fit for a large dataset that needs to be chunked and then processed.
But in other scenarios, pandas is a perfectly reasonable choice of data structure.