polars.LazyFrame.collect_batches
LazyFrame.collect_batches(
    *,
    chunk_size: int | None = None,
    maintain_order: bool = True,
    lazy: bool = False,
    engine: EngineType = 'auto',
    optimizations: QueryOptFlags = (),
)
Evaluate the query in streaming mode and get a generator that returns chunks.
This allows streaming results that are larger than RAM to be written to disk.
The query will always be fully executed unless stop is called, so you should call next until all chunks have been seen.

Warning

This functionality is considered unstable. It may be changed at any point without it being considered a breaking change.
Warning
This method is much slower than native sinks. Only use it if you cannot implement your logic otherwise.
- Parameters:
- chunk_size
The number of rows that are buffered before a chunk is given.
- maintain_order
Maintain the order in which data is processed. Setting this to False will be slightly faster.
- lazy
Start executing the query only when the first batch is requested.
- engine
Select the engine used to process the query (default "auto"):
- "auto": use the engine set by Config.set_engine_affinity or the POLARS_ENGINE_AFFINITY environment variable, falling back to "streaming" if unset.
- "in-memory": use the in-memory engine before writing; this is the default engine.
- "streaming": use the streaming engine, which processes queries in batches, reducing memory pressure and often outperforming the in-memory engine. This will soon become the default engine of Polars.
- "gpu": use the CUDA GPU engine (requires an Nvidia GPU and cudf-polars). Pass a GPUEngine object for fine-grained control.

If the selected engine cannot run the query, Polars falls back to the streaming engine.
- optimizations
The optimization passes done during query optimization.
Examples
>>> lf = pl.scan_csv("/path/to/my_larger_than_ram_file.csv")
>>> for df in lf.collect_batches():
...     print(df)