704 questions
1 vote · 0 answers · 31 views
How to handle unstable best_iteration in LightGBM when using Optuna for hyperparameter optimization?
I'm using Optuna to optimize LightGBM hyperparameters, and I'm running into an issue with the variability of best_iteration across different random seeds.
Current Setup
I train multiple models with ...
0 votes · 1 answer · 96 views
LightGBM randomly crashing the jupyter notebook Kernel "[LightGBM] [Fatal] Could not open None" [closed]
I'm using the lightGBM library for Python. I'm finding hyperparameters via Optuna, so multiple trainings are happening (trials >> 100). If I run my Python file with the hyperparameter tuning ...
0 votes · 0 answers · 77 views
tidymodels lightgbm hyperparameters training issue
I have an issue with training lightgbm models through tidymodels.
There seems to be some sort of issue with how the hyperparameters are translated between tidymodels and lightgbm.
This is my code:
...
1 vote · 1 answer · 41 views
Lightgbm early_stopping: min_delta doesn't work
I was using lightgbm with early_stopping and min_delta, but judging by the results, min_delta seems to have no effect.
final_model = lgb.train(
    params,
    train_data,
    ...
0 votes · 0 answers · 71 views
How to use GPU to train LGBM model from SynapseML
I have gone through the entire docs for the SynapseML LightGBMRanker module. There was no attribute I could pass to tell the model to use a CUDA GPU for training.
For example, for the plain LightGBM library, ...
5 votes · 1 answer · 131 views
Darts and LightGBM: original column names cannot be retrieved for feature importance
Problem
I am running a LightGBMModel via Darts with some (future) covariates. I want to understand the relevance of the different (lagged) features.
In particular, I would like to retrieve the feature ...
6 votes · 1 answer · 440 views
Constructing custom loss function in lightgbm
I have a pandas dataframe that records the outcome of F1 races:
data = {
    "Race_ID": [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4],
    "Racer_Number": [1, 2, 3, ...
1 vote · 0 answers · 28 views
Cannot reproduce MSE in lightgbm custom objective when feature_fraction is set
import numpy as np
import lightgbm as lgb

def custom_mse_objective(preds, train_data):
    labels = train_data.get_label()
    grad = (preds - labels)
    hess = np.ones_like(labels)
    return grad, ...
3 votes · 1 answer · 367 views
lightgbm force variables to be in splits
I'm trying to find a way to train a lightgbm model while forcing some features to be used in the splits, i.e. "to appear in the feature importance", so that the predictions are affected by these ...
1 vote · 1 answer · 136 views
What is max_evals in hyperopt fmin function?
I'm trying to run a multi-classification problem. I have run a baseline lightGBM model with around 80% accuracy. I'm trying to fine-tune its hyperparameters using Hyperopt. However, when ...
0 votes · 0 answers · 38 views
LightGBM with Google Colaboratory
I'm developing an app in Google Colaboratory and need to train a model on big data. I'm using LightGBM and need to set up GPU / CUDA.
But when I try to install these dependencies I have ...
3 votes · 1 answer · 61 views
Reproduce LGBMRegressor predictions by manually aggregate the values
I am trying to reproduce the LGBMRegressor predictions myself, so that once I succeed I can swap the mean for the median. But for now it seems that I am not able to.
Here is a simple script that I created ...
-1 votes · 1 answer · 90 views
Why does lightgbm's .predict function return probabilities not between 0 and 1? [closed]
I want to understand why, in this code, I get the following results:
# Import necessary libraries
import pandas as pd
from sklearn.metrics import f1_score
from sklearn.model_selection import ...
2 votes · 1 answer · 111 views
lightgbm.cv: cvbooster.best_iteration always returns -1
I am migrating from XGBoost to LightGBM (since I need its exact handling of interaction constraints) and I am struggling to understand the result of LightGBM CV. In the example below, the minimum log-...
1 vote · 2 answers · 114 views
Why can't I wrap LGBM?
I'm using LGBM to forecast the relative change of a numerical quantity. I'm using the MSLE (Mean Squared Log Error) loss function to optimize my model and to get the correct scaling of errors. Since ...