MemoryError when using MultinomialNB (numpy)

Asked by b5buobof on 2023-03-23

I am getting a MemoryError when using sklearn.naive_bayes.MultinomialNB for named entity recognition on a large dataset, where train.shape = (416330, 97896):

import csv
import numpy as np
import pandas as pd
from sklearn.feature_extraction import DictVectorizer
from sklearn.naive_bayes import MultinomialNB

data_train = pd.read_csv(path[0] + "train_SENTENCED.tsv", encoding="utf-8", sep='\t', quoting=csv.QUOTE_NONE)
data_test = pd.read_csv(path[0] + "test_SENTENCED.tsv", encoding="utf-8", sep='\t', quoting=csv.QUOTE_NONE)

print('TRAIN_DATA:\n', data_train.tail(5))

# FIT TRANSFORM
X_TRAIN = data_train.drop('Tag', axis=1)
X_TEST = data_test.drop('Tag', axis=1)
v = DictVectorizer(sparse=False)  # sparse=False forces a dense output array
X_train = v.fit_transform(X_TRAIN.to_dict('records'))
X_test = v.transform(X_TEST.to_dict('records'))
y_train = data_train.Tag.values
y_test = data_test.Tag.values
classes = np.unique(y_train)
classes = classes.tolist()

nb = MultinomialNB(alpha=0.01)
nb.partial_fit(X_train, y_train, classes)
new_classes = classes.copy()
new_classes.pop()
predictions = nb.predict(X_test)

The error output is as follows:

Traceback (most recent call last):  
File "naive_bayes_classifier/main.py", line 107, in <module>  
      X_train = v.fit_transform(X_TRAIN.to_dict('records'))  
File "lib/python3.8/site-packages/sklearn/feature_extraction/_dict_vectorizer.py", line 313, in fit_transform  
      return self._transform(X, fitting=True)  
File "lib/python3.8/site-packages/sklearn/feature_extraction/_dict_vectorizer.py", line 282, in _transform  
      result_matrix = result_matrix.toarray()  
File "lib/python3.8/site-packages/scipy/sparse/compressed.py", line 1031, in toarray  
      out = self._process_toarray_args(order, out)  
File "lib/python3.8/site-packages/scipy/sparse/base.py", line 1202, in _process_toarray_args  
      return np.zeros(self.shape, dtype=self.dtype, order=order) 
numpy.core._exceptions.MemoryError: Unable to allocate 304. GiB for an array with shape (416330, 97896) and data type float64
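
(The reported size is consistent with a dense float64 matrix of that shape: 416330 × 97896 × 8 bytes ≈ 3.26 × 10^11 bytes ≈ 304 GiB.)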

Even when I set the parameters as DictVectorizer(sparse=False, dtype=np.short), the code then fails with an error on the nb.partial_fit(X_train, y_train, classes) line.
How can I prevent this MemoryError? Is there a proper way to solve it? I thought about splitting the training set, but since the vectorizer is fitted on the corresponding dataset, would that be a correct solution?
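
As a point of reference, MultinomialNB accepts scipy sparse input, so a minimal sketch of a sparse variant of the pipeline above (an editorial sketch, not verified against this exact dataset) would be:

# Keeping the vectorizer sparse means the ~304 GiB dense array is
# never allocated; MultinomialNB works directly on scipy CSR matrices.
v = DictVectorizer(sparse=True)
X_train = v.fit_transform(X_TRAIN.to_dict('records'))  # CSR matrix, not dense
X_test = v.transform(X_TEST.to_dict('records'))

nb = MultinomialNB(alpha=0.01)
nb.fit(X_train, y_train)
predictions = nb.predict(X_test)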


ppcbkaq5 1#

Sadly, the problem is that, due to its complexity, solving this task with sklearn genuinely requires a lot of memory.
.partial_fit() is a good approach, but I would suggest chunking the data into smaller pieces and partially fitting the classifier chunk by chunk. Try splitting the dataset into small chunks and see whether that works. If you still get the same error, try even smaller chunks.
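
A minimal sketch of that chunking idea (the chunk size is arbitrary, and keeping the DictVectorizer sparse for the per-chunk transform is an assumption not stated above):

import numpy as np

CHUNK_SIZE = 50000  # illustrative; shrink further if memory is still tight

records = X_TRAIN.to_dict('records')
v = DictVectorizer(sparse=True)
v.fit(records)  # .fit() only learns the feature mapping, no big matrix

classes = np.unique(y_train)
nb = MultinomialNB(alpha=0.01)
for start in range(0, len(records), CHUNK_SIZE):
    # transform and fit one slice at a time, so only one chunk of the
    # feature matrix is ever held in memory
    X_chunk = v.transform(records[start:start + CHUNK_SIZE])
    nb.partial_fit(X_chunk, y_train[start:start + CHUNK_SIZE], classes=classes)

predictions = nb.predict(v.transform(X_TEST.to_dict('records')))

Prediction can be chunked the same way if the test set is also too large to transform at once.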
