pyspark LDA: get the words in each topic

Question (votes: 1, answers: 1)

I am trying to run LDA. Instead of applying it to words and documents, I am applying it to error messages and error causes. Each row is an error and each column is an error cause; a cell is 1 if that error cause is active and 0 if it is not. I am now trying to get, for each created topic (here equivalent to an error pattern), the names of the error causes, not just their indices. The code I have so far seems to work; it is shown below, after a short illustrative sketch of the input layout:
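For illustration only, here is a minimal sketch of an input DataFrame in the shape described above; the cause_A/cause_B/cause_C column names are hypothetical and only error_ID is taken from the question.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical example data: one row per error, one 0/1 column per error cause
df = spark.createDataFrame(
    [("err_001", 1, 0, 1),
     ("err_002", 0, 1, 0),
     ("err_003", 1, 1, 0)],
    ["error_ID", "cause_A", "cause_B", "cause_C"])
df.show()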

from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import LDA

# VectorAssembler combines all error-cause columns into one feature vector
assembler = VectorAssembler(
    inputCols=list(set(df.columns) - {'error_ID'}),
    outputCol="features")
lda_input = assembler.transform(df)

# Train LDA model
lda = LDA(k=5, maxIter=10, featuresCol="features")
model = lda.fit(lda_input)

# A model with higher log-likelihood and lower perplexity is considered to be good.
ll = model.logLikelihood(lda_input)
lp = model.logPerplexity(lda_input)
print("The lower bound on the log likelihood of the entire corpus: " + str(ll))
print("The upper bound on perplexity: " + str(lp))

# Describe topics.
topics = model.describeTopics(7)
print("The topics described by their top-weighted terms:")
topics.show(truncate=False)

# Shows the result
transformed = model.transform(lda_input)
transformed.show(truncate=False)

My output is:

[screenshot of the output for each row]

Based on https://spark.apache.org/docs/latest/mllib-clustering.html#latent-dirichlet-allocation-lda I added the part below, but it does not work:

topics = model.topicsMatrix()
for topic in range(10):
    print("Topic " + str(topic) + ":")
    for word in range(0, model.vocabSize()):
        print(" " + str(topics[word][topic]))

How do I now get the top error causes, i.e. find the columns that correspond to the term indices?

apache-spark pyspark lda topic-modeling
1 Answer
0 votes

To iterate over a DenseMatrix, you need to convert it to an array first. This should not raise an error, but I cannot be sure about the printed result, since that depends on your data.

topn_words = 10
num_topics = 10

# topicsMatrix() is a vocabSize x k DenseMatrix; toArray() yields a NumPy array
topics = model.topicsMatrix().toArray()
for topic in range(num_topics):
    print("Topic " + str(topic) + ":")
    for word in range(0, topn_words): 
        print(" " + str(topics[word][topic]))