polars.io.csv.batched_reader.BatchedCsvReader.next_batches

BatchedCsvReader.next_batches(n: int) → list[DataFrame] | None

Read n batches from the reader.

The n chunks will be parallelized over the available threads.

Parameters:
n

Number of chunks to fetch. Ideally this is at least the number of available threads.

Returns:
A list of DataFrames, or None if there are no more batches to read.

Examples

>>> reader = pl.read_csv_batched(
...     "./tpch/tables_scale_100/lineitem.tbl",
...     separator="|",
...     try_parse_dates=True,
... )  
>>> reader.next_batches(5)
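
A minimal sketch of draining the reader, using the same placeholder file path as above: keep calling next_batches until it returns None, then concatenate the accumulated batches into a single DataFrame.

>>> import polars as pl
>>> reader = pl.read_csv_batched(
...     "./tpch/tables_scale_100/lineitem.tbl",
...     separator="|",
...     try_parse_dates=True,
... )
>>> dfs = []
>>> while (batches := reader.next_batches(5)) is not None:
...     # each call returns up to 5 DataFrames, read in parallel
...     dfs.extend(batches)
>>> df = pl.concat(dfs, rechunk=True)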