Up to [cvs.NetBSD.org] / pkgsrc / math / py-xgboost
py-xgboost: mark as broken on SunOS

NotImplementedError: System SunOS not supported
py-xgboost: updated to 2.1.4

2.1.4

The 2.1.4 patch release incorporates the following fixes on top of the 2.1.3 release:
- XGBoost is now compatible with scikit-learn 1.6
- Build wheels with CUDA 12.8 and enable Blackwell support
- Adapt to RMM 25.02 logger changes
py-xgboost: updated to 2.1.3

2.1.3
- [pyspark] Support large model size
- Fix rng for the column sampler
- Handle cudf.pandas proxy objects properly

2.1.2
- Clean up and modernize release-artifacts.py
- Fix ellpack categorical feature with missing values.
- Fix unbiased ltr with training continuation.
- Fix potential race in feature constraint.
- Fix boolean array for arrow-backed DF.
- Ensure that pip check does not fail due to a bad platform tag
- Check cub errors
- Limit the maximum number of threads.
- Fixes for large size clusters.
- POSIX compliant poll.h and mmap
*: clean-up after python38 removal
py-xgboost: updated to 2.1.1

The 2.1.1 patch release makes the following bug fixes:
- [Dask] Disable broadcast in the scatter call so that the predict function won't hang
- [Dask] Handle empty partitions correctly
- Fix federated learning for the encrypted gRPC backend
- Fix a race condition in the column splitter
- Gracefully handle cases where system files like /sys/fs/cgroup/cpu.max are not readable by the user
- Fix build and C++ tests for FreeBSD
- Clarify the Pandas 1.2+ requirement
- More robust endianness detection in the R package build

In addition, it contains several enhancements:
- Publish JVM packages targeting Linux ARM64
- Publish a CPU-only wheel under the name xgboost-cpu
- Support building with CUDA Toolkit 12.5 and the latest CCCL
py-xgboost: insists on gcc 8.1+
py-xgboost: remove unused REPLACE_; spotted by @wiz
py-xgboost: updated to 2.0.3

2.0.3
- [backport] [sklearn] Fix loading model attributes.
- [backport] [py] Use the first found native library.
- [backport] [CI] Upload libxgboost4j.dylib (M1) to S3 bucket
- [jvm-packages] Fix POM for xgboost-jvm metapackage
*: remove more references to Python 3.7
*: restrict py-numpy users to 3.9+ in preparation for update
py-xgboost: updated to 1.7.6

1.7.6 Patch Release

Bug Fixes
- Fix distributed training with mixed dense and sparse partitions.
- Fix monotone constraints on CPU with large trees.
- [spark] Make the spark model have the same UID as its estimator
- Optimize prediction with QuantileDMatrix.

Document
- Improve doxygen
- Update the cuDF pip index URL.

Maintenance
- Fix tests with pandas 2.0.
py-xgboost: added version 1.7.5

XGBoost is an optimized distributed gradient boosting library designed to be highly efficient, flexible, and portable. It implements machine learning algorithms under the Gradient Boosting framework. XGBoost provides parallel tree boosting (also known as GBDT or GBM) that solves many data science problems in a fast and accurate way. The same code runs on major distributed environments (Kubernetes, Hadoop, SGE, Dask, Spark, PySpark) and can solve problems beyond billions of examples.