I am using Python's hyperopt library to optimize ML hyperparameters. Specifically, I am trying to find the best hyperparameters for lightgbm by minimizing this function:
def lgb_objective_map(params):
    """
    Objective function for lightgbm using MAP as success metric.
    """
    # hyperopt casts these as floats
    params['num_boost_round'] = int(params['num_boost_round'])
    params['num_leaves'] = int(params['num_leaves'])
    params['min_data_in_leaf'] = int(params['min_data_in_leaf'])
    # need to be passed as parameters
    params['verbose'] = -1
    params['seed'] = 1
    # Cross validation
    cv_result = lgb.cv(
        params,
        lgtrain,
        nfold=3,
        metrics='binary_logloss',
        num_boost_round=params['num_boost_round'],
        early_stopping_rounds=20,
        stratified=False,
    )
    # Update the number of trees based on the early stopping results
    early_stop_dict[lgb_objective_map.i] = len(cv_result['binary_logloss-mean'])
    params['num_boost_round'] = len(cv_result['binary_logloss-mean'])
    # fit and predict
    # model = lgb.LGBMRegressor(**params)
    # model.fit(train, y_train, feature_name=all_cols, categorical_feature=cat_cols)
    model = lgb.train(params=params, train_set=lgtrain)
    preds = model.predict(X_test)
    # score the predictions on the test set
    result = log_loss(y_test, preds)
    # ratio of actual to predicted positives
    actual_predicted = np.sum(y_test) / np.sum(preds)
    print("INFO: iteration {} logloss {:.3f} actual on predicted ratio {:.3f}".format(
        lgb_objective_map.i, result, actual_predicted))
    lgb_objective_map.i += 1
    return result

The hyperopt call is:
best = fmin(fn=lgb_objective_map,
            space=lgb_parameter_space,
            algo=tpe.suggest,
            max_evals=200,
            trials=trials)

Is it possible to modify the fmin call so that supplementary arguments such as lgtrain, X_test, and y_test can be passed to lgb_objective_map? This would make the hyperopt call more generic.
Posted on 2019-06-22 04:50:44
The partial function from functools provides an elegant solution.
Simply wrap your function, supplying the required arguments:

partial(yourFunction, arg_1, arg_2, ..., arg_n)

and then pass the result to hyperopt's fmin function.
Here is a toy example (fmin also needs a search algorithm, so algo=tpe.suggest and a max_evals budget are added below):

from functools import partial
from hyperopt import hp, fmin, tpe, STATUS_OK

def objective(params, data):
    # f is some loss function of the data and the sampled hyperparameters
    output = f(data, **params)
    return {'loss': output, 'status': STATUS_OK}

fmin_objective = partial(objective, data=data)

bestParams = fmin(fn=fmin_objective, space=params, algo=tpe.suggest, max_evals=100)

https://stackoverflow.com/questions/54478779
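Applied to the question's function, the refactor is mechanical: give lgb_objective_map explicit data parameters and bind them with partial before handing the result to fmin. A minimal sketch of the binding behavior, using a hypothetical stand-in objective and dummy data (no lightgbm or hyperopt needed to run it):

```python
from functools import partial

# Stand-in objective: hyperopt would supply only `params`;
# the data arguments are pre-bound via partial.
def objective(params, lgtrain, X_test, y_test):
    # a real objective would train on lgtrain and score on X_test/y_test;
    # here we just return a value that depends on all arguments
    return params['x'] ** 2 + len(X_test) - len(y_test)

# Hypothetical data stand-ins
lgtrain, X_test, y_test = object(), [1, 2, 3], [0, 1, 1]

# Bind the supplementary arguments; fmin sees a one-argument callable
fmin_objective = partial(objective, lgtrain=lgtrain, X_test=X_test, y_test=y_test)

print(fmin_objective({'x': 2.0}))  # 4.0
```

Because partial binds by keyword here, the wrapped callable still accepts exactly one positional argument, which is the signature fmin expects.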