PySpark Similar-Article Recommendation with Word2Vec + TF-IDF + LSH
Source: www.freesion.com/article/32183 · Published 2021-03-24
1. Reading and segmenting the data

# Data: article_id, channel_id, channel_name, title, content, sentence
article_data = spark.sparkContext.textFile(r'news_data')
article_data = article_data.map(lambda line: line.split('\x01'))
print('raw data', article_data.take(10))

# segmentation is a user-defined tokenizer (a sketch follows below)
words_df = article_data.mapPartitions(segmentation).toDF(['article_id', 'channel_id', 'words'])
print('segmented data', words_df.take(10))

Data format: article_id, channel_id, channel_name, title, content, sentence
\"在这里插入图片描述\"
Depending on your data, you can also strip English text with a regex during segmentation.
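The segmentation function used above is not shown in the original post. A minimal sketch of what it might look like, assuming jieba for Chinese tokenization (the regex and the length filter are illustrative choices, not the author's exact ones):

import re
import jieba

def segmentation(partition):
    # Each element is the list of six fields produced by the \x01 split above
    for article_id, channel_id, channel_name, title, content, sentence in partition:
        # Optionally strip English text, as noted above
        text = re.sub(r'[a-zA-Z]+', '', sentence)
        # Tokenize and drop single-character tokens; add stop-word filtering as needed
        words = [w for w in jieba.cut(text) if len(w.strip()) > 1]
        yield article_id, channel_id, words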

2. Training Word2Vec on the segmented words

# 2. Train Word2Vec on the segmented words
from pyspark.ml.feature import Word2Vec

w2v_model = Word2Vec(vectorSize=100, inputCol='words', outputCol='vector', minCount=3)
model = w2v_model.fit(words_df)
model.write().overwrite().save('models/word2vec_model/python.word2vec')

# Reload the trained model and inspect the word vectors
from pyspark.ml.feature import Word2VecModel

w2v_model = Word2VecModel.load('models/word2vec_model/python.word2vec')
vectors = w2v_model.getVectors()
vectors.show()

This yields a vector for every word in the channel's vocabulary.
\"在这里插入图片描述\"

3. Extracting keywords

1. Keywords with their weights and word vectors

# TF-IDF
# Term frequency (TF)
from pyspark.ml.feature import CountVectorizer

# vocabSize caps the vocabulary; minDF is the minimum number of documents a term must appear in
cv = CountVectorizer(inputCol='words', outputCol='countFeatures', vocabSize=200 * 10000, minDF=1.0)
# Fit the term-frequency model
cv_model = cv.fit(words_df)
cv_model.write().overwrite().save('models/CV.model')

from pyspark.ml.feature import CountVectorizerModel
cv_model = CountVectorizerModel.load('models/CV.model')
# Term-frequency vectors
cv_result = cv_model.transform(words_df)

# IDF
from pyspark.ml.feature import IDF
idf = IDF(inputCol='countFeatures', outputCol='idfFeatures')
idf_model = idf.fit(cv_result)
idf_model.write().overwrite().save('models/IDF.model')

# TF-IDF (despite its name, the idfFeatures column holds TF-IDF values after this transform)
from pyspark.ml.feature import IDFModel
idf_model = IDFModel.load('models/IDF.model')
tfidf_result = idf_model.transform(cv_result)

# Keep the top 20 terms per article as keywords (still vocabulary indices at this point)
def sort_by_tfidf(partition):
    TOPK = 20
    for row in partition:
        # Pair each index with its TF-IDF value and sort descending
        _dict = list(zip(row.idfFeatures.indices, row.idfFeatures.values))
        _dict = sorted(_dict, key=lambda x: x[1], reverse=True)
        for word_index, tfidf in _dict[:TOPK]:
            yield row.article_id, row.channel_id, int(word_index), round(float(tfidf), 4)

keywords_by_tfidf = tfidf_result.rdd.mapPartitions(sort_by_tfidf).toDF(
    ['article_id', 'channel_id', 'index', 'weights'])

# Build the (keyword, idf, index) mapping from the vocabulary
keywords_list_with_idf = list(zip(cv_model.vocabulary, idf_model.idf.toArray()))

def append_index(data):
    for index in range(len(data)):
        data[index] = list(data[index])    # convert the tuple to a list
        data[index].append(index)          # append the vocabulary index
        data[index][1] = float(data[index][1])

append_index(keywords_list_with_idf)
sc = spark.sparkContext
rdd = sc.parallelize(keywords_list_with_idf)
idf_keywords = rdd.toDF(['keywords', 'idf', 'index'])

# Resolve each article's keyword indices to words, keeping the TF-IDF weights
keywords_result = keywords_by_tfidf.join(
    idf_keywords, idf_keywords.index == keywords_by_tfidf.index).select(
    ['article_id', 'channel_id', 'keywords', 'weights'])
print('keyword weights', keywords_result.take(10))

# Join the article keywords with their word vectors
keywords_vector = keywords_result.join(vectors, vectors.word == keywords_result.keywords, 'inner')
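The join above relies on cv_model.vocabulary[i] being the word counted at index i, with idf_model.idf[i] as its IDF weight. An optional quick check of that alignment:

# Print the first few (index, word, idf) triples side by side
for i, (word, idf_value) in enumerate(zip(cv_model.vocabulary[:5], idf_model.idf.toArray()[:5])):
    print(i, word, round(float(idf_value), 4))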

This gives each article's keywords with their TF-IDF weights; joining with the word vectors from step 2 then attaches each keyword's vector. Note that the inner join silently drops keywords whose words were pruned by Word2Vec's minCount=3.
\"在这里插入图片描述\"

2. Multiplying keyword weights by word vectors

def compute_vector(row):
    # Scale the keyword's word vector by its TF-IDF weight
    return row.article_id, row.channel_id, row.keywords, row.weights * row.vector

article_keyword_vectors = keywords_vector.rdd.map(compute_vector).toDF(
    ['article_id', 'channel_id', 'keywords', 'weightingVector'])

# Gather all of an article's weighted keyword vectors into one list with collect_set()
article_keyword_vectors.createOrReplaceTempView('temptable')
article_keyword_vectors = spark.sql(
    'select article_id, min(channel_id) channel_id, '
    'collect_set(weightingVector) vectors from temptable group by article_id')
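The weights * vector expression works because pyspark.ml.linalg.DenseVector supports elementwise scalar arithmetic; a standalone illustration:

from pyspark.ml.linalg import DenseVector

v = DenseVector([1.0, 2.0, 3.0])
print(v * 0.5)    # [0.5,1.0,1.5]
print(0.5 * v)    # a scalar on the left also works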

3. Averaging the weighted vectors

def compute_avg_vectors(row):
    # Sum the weighted keyword vectors, then divide by their count
    x = 0
    for i in row.vectors:
        x += i
    return row.article_id, row.channel_id, x / len(row.vectors)

article_vector = article_keyword_vectors.rdd.map(compute_avg_vectors).toDF(
    ['article_id', 'channel_id', 'articlevector'])
print('final article vectors', article_vector.take(10))
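As an aside, steps 2 and 3 could likely be collapsed with Spark's Summarizer (available since Spark 2.4), averaging the weighted vectors without collect_set or a hand-rolled loop. A sketch, where per_keyword stands for the hypothetical step-2 DataFrame before the collect_set query (one row per article-keyword pair):

from pyspark.ml.stat import Summarizer

article_vector_alt = per_keyword.groupBy('article_id').agg(
    Summarizer.mean(per_keyword.weightingVector).alias('articlevector'))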

Weight-averaging the keyword vectors yields the final training vector for each article. (Why use only the keywords rather than the full word list? Worth thinking about: the top TF-IDF words carry most of an article's discriminative signal, and dropping the rest cuts noise and computation.)
\"在这里插入图片描述\"

4. LSH similarity

# LSH
from pyspark.ml.feature import BucketedRandomProjectionLSH, MinHashLSH

train = article_vector.select(['article_id', 'articlevector'])

# 1. BucketedRandomProjectionLSH (Euclidean distance)
brp = BucketedRandomProjectionLSH(inputCol='articlevector', outputCol='hashes',
                                  numHashTables=4, bucketLength=10.0)
brp_model = brp.fit(train)
# Self-join: all pairs within distance 2.0 (each article is also paired with itself)
similar = brp_model.approxSimilarityJoin(train, train, 2.0, distCol='EuclideanDistance')
similar.show()

# 2. MinHashLSH (Jaccard distance over the sets of non-zero indices)
mh = MinHashLSH(inputCol='articlevector', outputCol='hashes', numHashTables=4)
mh_model = mh.fit(train)
similar = mh_model.approxSimilarityJoin(train, train, 2.0, distCol='JaccardDistance')
similar.show()

# k nearest neighbours of a given key vector
# similar = mh_model.approxNearestNeighbors(train, key, 2)

BucketedRandomProjectionLSH result:

MinHashLSH result:
Generally the first variant, BucketedRandomProjectionLSH, is the better fit here: MinHashLSH treats every non-zero component as set membership and is intended for sparse binary features, whereas the averaged word2vec vectors are dense and real-valued.
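For recommendation, the commented-out approxNearestNeighbors call is typically the more useful query. A sketch using the random-projection model from above (the article_id value is hypothetical):

# Use one article's vector as the query key and fetch its 5 nearest neighbours
key = train.filter(train.article_id == 1).first().articlevector
brp_model.approxNearestNeighbors(train, key, 5).show()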
