Dask-ML
Dask-ML provides scalable machine learning in Python using Dask alongside popular machine learning libraries like Scikit-Learn, XGBoost, and others.
Dimensions of Scale
Challenge 1: Scaling Model Size
The first kind of scaling challenge comes from your models growing so large or complex that they affect your workflow. Under this scaling challenge, tasks like model training, prediction, or evaluation will (eventually) complete; they just take too long. You’ve become compute bound.
To address this challenge, you’d continue to use the collections you know and love (like the NumPy ndarray, pandas DataFrame, or XGBoost DMatrix) and use a Dask cluster to parallelize the workload across many machines. The parallelization can happen through one of our integrations (like Dask’s joblib backend, which parallelizes Scikit-Learn directly) or through one of Dask-ML’s estimators (like our hyper-parameter optimizers), as in the sketch below.
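As a minimal sketch of the joblib route: the local Client, the synthetic dataset, and the particular grid-search parameters below are illustrative assumptions, not part of Dask-ML itself.

```python
import joblib
from dask.distributed import Client
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

client = Client()  # assumption: a local cluster; point this at a real one

X, y = make_classification(n_samples=1_000, n_features=20)  # toy data

search = GridSearchCV(
    RandomForestClassifier(),
    {"n_estimators": [50, 100, 200], "max_depth": [2, 4, 8]},
    cv=3,
)

# Dask's joblib backend ships Scikit-Learn's internal parallelism
# (here, the cross-validated fits) out to the workers on the cluster
with joblib.parallel_backend("dask"):
    search.fit(X, y)
```

Note that the data here still fits in memory; only the compute is distributed, which is exactly the compute-bound case this section describes.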
Challenge 2: Scaling Data Size
The second type of scaling challenge people face is when their datasets grow larger than RAM. Under this scaling challenge, even loading the data into NumPy or pandas becomes impossible.
To address this challenge, you’d use one of Dask’s high-level collections (like Dask Array, Dask DataFrame, or Dask Bag) combined with one of Dask-ML’s estimators that are designed to work with Dask collections. For example, you might use Dask Array with one of our preprocessing estimators in dask_ml.preprocessing, or one of our ensemble methods in dask_ml.ensemble, as sketched below.
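As a minimal sketch of that pattern, the snippet below scales a potentially larger-than-memory Dask Array with dask_ml.preprocessing.StandardScaler; the array shape and chunk sizes are arbitrary choices for illustration.

```python
import dask.array as da
from dask_ml.preprocessing import StandardScaler

# A dataset that may not fit in RAM, stored as many smaller NumPy chunks
X = da.random.random((1_000_000, 20), chunks=(100_000, 20))

# The scaler computes its mean/variance statistics chunk by chunk,
# so no single machine ever has to hold all of X at once
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)  # result is itself a lazy Dask Array
```

Because the result is also a Dask collection, it can flow straight into another Dask-aware estimator without materializing in memory.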