Tokenizing a DataFrame with TensorFlow and Transformers

Asked by cqoc49vn on 2021-07-13

I have a labeled dataset in a Pandas DataFrame:

>>> df.dtypes
title          object
headline       object
byline         object
dateline       object
text           object
copyright    category
country      category
industry     category
topic        category
file           object
dtype: object

I am building a model to predict topic based on text. Here text is one large string, while topic is a list of strings. For example (see the label-parsing sketch after this sample):

>>> df['topic'].head(5)
0    ['ECONOMIC PERFORMANCE', 'ECONOMICS', 'EQUITY ...
1      ['CAPACITY/FACILITIES', 'CORPORATE/INDUSTRIAL']
2    ['PERFORMANCE', 'ACCOUNTS/EARNINGS', 'CORPORAT...
3    ['PERFORMANCE', 'ACCOUNTS/EARNINGS', 'CORPORAT...
4    ['STRATEGY/PLANS', 'NEW PRODUCTS/SERVICES', 'C...
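
For context, here is roughly how I plan to turn these stringified lists into a multi-hot label matrix. This is only a minimal sketch using ast.literal_eval and scikit-learn's MultiLabelBinarizer; the exact approach is my own assumption, not something settled in the pipeline below:

import ast
from sklearn.preprocessing import MultiLabelBinarizer

# Each topic cell comes out of the CSV as the string "['A', 'B']",
# so literal_eval is needed to recover an actual Python list.
topics = df['topic'].astype(str).apply(
    lambda s: ast.literal_eval(s) if s.startswith('[') else [])

# One indicator column per distinct topic label.
mlb = MultiLabelBinarizer()
labels = mlb.fit_transform(topics)  # shape: (n_rows, n_unique_topics)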

Before I can feed it into a model I have to tokenize the whole DataFrame, but when I pass it through transformers' AutoTokenizer I get an error.

import pandas as pd
import numpy as np
import tensorflow as tf
from transformers import AutoTokenizer
import tensorflow_hub as hub
import tensorflow_text as text
from sklearn.model_selection import train_test_split

def preprocess_text(df):

    # Remove punctuations and numbers
    df['text'] = df['text'].str.replace('[^a-zA-Z]', ' ', regex=True)

    # Single character removal
    df['text'] = df['text'].str.replace(r"\s+[a-zA-Z]\s+", ' ', regex=True)

    # Removing multiple spaces
    df['text'] = df['text'].str.replace(r'\s+', ' ', regex=True)

    # Remove NaNs
    df['text'] = df['text'].fillna('')
    df['topic'] = df['topic'].cat.add_categories('').fillna('')

    return df

# Load tokenizer and logger

tf.get_logger().setLevel('ERROR')
tokenizer = AutoTokenizer.from_pretrained('roberta-large')

# Load dataframe with just text and topic columns

# Only loading first 100 rows for testing purposes

df = pd.DataFrame()
for chunk in pd.read_csv(r'C:\Users\pfortier\Documents\Reuters\test.csv', sep='|', chunksize=100,
                dtype={'topic': 'category', 'country': 'category', 'industry': 'category', 'copyright': 'category'}):
    df = chunk
    break
df = preprocess_text(df)

# Split dataset into train, test, val (70, 15, 15)

train, test = train_test_split(df, test_size=0.15)
train, val = train_test_split(train, test_size=0.15)

# Tokenize datasets

train = tokenizer(train, return_tensors='tf', truncation=True, padding=True, max_length=128)
val = tokenizer(val, return_tensors='tf', truncation=True, padding=True, max_length=128)
test = tokenizer(test, return_tensors='tf', truncation=True, padding=True, max_length=128)

I get this error:

AssertionError: text input must of type `str` (single example), `List[str]` (batch or single pretokenized example) or `List[List[str]]` (batch of pretokenized examples).

The error is raised on the line train = tokenizer(train, return_tensors='tf', truncation=True, padding=True, max_length=128).
Does this mean I have to turn my df into a list?
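
In other words, is something like this the intended usage? A minimal sketch of what I have in mind, passing the text column as a plain list of strings (the *_enc names are just mine):

# .tolist() yields List[str], which matches the input types
# the assertion message says the tokenizer accepts.
train_enc = tokenizer(train['text'].tolist(), return_tensors='tf',
                      truncation=True, padding=True, max_length=128)
val_enc = tokenizer(val['text'].tolist(), return_tensors='tf',
                    truncation=True, padding=True, max_length=128)
test_enc = tokenizer(test['text'].tolist(), return_tensors='tf',
                     truncation=True, padding=True, max_length=128)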
