Entity


The Entity pipeline applies a token classifier to text and extracts entity/label combinations.

Example

The following shows a simple example using this pipeline.

from txtai.pipeline import Entity

# Create and run pipeline
entity = Entity()
entity("Canada's last fully intact ice shelf has suddenly collapsed, " \
       "forming a Manhattan-sized iceberg")

# Extract entities using a GLiNER model which supports dynamic labels
entity = Entity("gliner-community/gliner_medium-v2.5")
entity("Canada's last fully intact ice shelf has suddenly collapsed, " \
       "forming a Manhattan-sized iceberg", labels=["country", "city"])
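
By default, the pipeline returns a list of (entity, entity type, score) tuples per input. The sketch below shows one way to filter that output downstream; the tuples are fabricated placeholders, not real model output.

```python
# Hypothetical output in the pipeline's (entity, entity type, score) format
results = [("Canada", "LOC", 0.999), ("Manhattan", "LOC", 0.998), ("iceberg", "MISC", 0.41)]

# Keep only high-confidence location entities
locations = [entity for entity, label, score in results if label == "LOC" and score >= 0.9]
print(locations)  # ['Canada', 'Manhattan']
```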

See the links below for more detailed examples.

Notebook  Description
Entity extraction workflows  Identify entity/label combinations  Open In Colab
Parsing the stars with txtai  Explore an astronomical knowledge graph of known stars, planets and galaxies  Open In Colab

Configuration-driven example

Pipelines are run with Python or configuration. Pipelines can be instantiated in configuration using the lower case name of the pipeline. Configuration-driven pipelines are run with workflows or the API.

config.yml

# Create pipeline using lower case class name
entity:

# Run pipeline with workflow
workflow:
  entity:
    tasks:
      - action: entity

Run with Workflows

from txtai import Application

# Create and run pipeline with workflow
app = Application("config.yml")
list(app.workflow("entity", ["Canada's last fully intact ice shelf has suddenly collapsed, forming a Manhattan-sized iceberg"]))

Run with API

CONFIG=config.yml uvicorn "txtai.api:app" &

curl \
  -X POST "http://localhost:8000/workflow" \
  -H "Content-Type: application/json" \
  -d '{"name":"entity", "elements": ["Canadas last fully intact ice shelf has suddenly collapsed, forming a Manhattan-sized iceberg"]}'

Methods

Python documentation for the pipeline.

__init__(path=None, quantize=False, gpu=True, model=None, **kwargs)

Source code in txtai/pipeline/text/entity.py
def __init__(self, path=None, quantize=False, gpu=True, model=None, **kwargs):
    # Create a new entity pipeline
    self.gliner = self.isgliner(path)
    if self.gliner:
        if not GLINER:
            raise ImportError('GLiNER is not available - install "pipeline" extra to enable')

        # GLiNER entity pipeline
        self.pipeline = GLiNER.from_pretrained(path)
        self.pipeline = self.pipeline.to(Models.device(Models.deviceid(gpu)))
    else:
        # Standard entity pipeline
        super().__init__("token-classification", path, quantize, gpu, model, **kwargs)

__call__(text, labels=None, aggregate='simple', flatten=None, join=False, workers=0)

Applies a token classifier to text and extracts entity/label combinations.

Parameters

Name | Type | Description | Default
text | text\|list | input text | required
labels | | list of entity type labels to accept, defaults to None which accepts all | None
aggregate | | method to combine multi token entities - options are "simple" (default), "first", "average" or "max" | 'simple'
flatten | | flatten output to a list of labels if present; accepts a boolean or float value to only keep scores greater than that number | None
join | | joins flattened output into a string if True, ignored if flatten is not set | False
workers | | number of concurrent workers to use for processing data | 0

Returns

Type | Description
 | list of (entity, entity type, score) tuples, or a list of entities, depending on the flatten parameter
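
The flatten and join parameters interact as follows. This standalone sketch mirrors the filtering logic in the source code below, using fabricated (word, score) pairs rather than real model output:

```python
def flatten_output(result, flatten=True, join=False):
    # Boolean flatten keeps every entity; a float flatten acts as a
    # minimum score threshold, matching the pipeline's handling
    threshold = 0.0 if isinstance(flatten, bool) else flatten
    output = [word for word, score in result if score >= threshold]

    # join combines the flattened output into a single string
    return " ".join(output) if join else output

result = [("Canada", 0.99), ("iceberg", 0.40)]
print(flatten_output(result, flatten=True))             # ['Canada', 'iceberg']
print(flatten_output(result, flatten=0.5))              # ['Canada']
print(flatten_output(result, flatten=0.5, join=True))   # Canada
```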

Source code in txtai/pipeline/text/entity.py
def __call__(self, text, labels=None, aggregate="simple", flatten=None, join=False, workers=0):
    """
    Applies a token classifier to text and extracts entity/label combinations.

    Args:
        text: text|list
        labels: list of entity type labels to accept, defaults to None which accepts all
        aggregate: method to combine multi token entities - options are "simple" (default), "first", "average" or "max"
        flatten: flatten output to a list of labels if present. Accepts a boolean or float value to only keep scores greater than that number.
        join: joins flattened output into a string if True, ignored if flatten not set
        workers: number of concurrent workers to use for processing data, defaults to None

    Returns:
        list of (entity, entity type, score) or list of entities depending on flatten parameter
    """

    # Run token classification pipeline
    results = self.execute(text, labels, aggregate, workers)

    # Convert results to a list if necessary
    if isinstance(text, str):
        results = [results]

    # Score threshold when flatten is set
    threshold = 0.0 if isinstance(flatten, bool) else flatten

    # Extract entities if flatten set, otherwise extract (entity, entity type, score) tuples
    outputs = []
    for result in results:
        if flatten:
            output = [r["word"] for r in result if self.accept(r["entity_group"], labels) and r["score"] >= threshold]
            outputs.append(" ".join(output) if join else output)
        else:
            outputs.append([(r["word"], r["entity_group"], float(r["score"])) for r in result if self.accept(r["entity_group"], labels)])

    return outputs[0] if isinstance(text, str) else outputs