🎓 Author: 计算机毕设小月哥 | Software Development Expert
🖥️ About: 8 years of software development experience. Proficient in Java, Python, WeChat Mini Programs, Android, big data, PHP, .NET|C#, Golang, and other technology stacks.
🛠️ Professional Services 🛠️

  • Custom development to your requirements
  • Source code delivery with walkthroughs
  • Technical document writing (guidance on graduation-project topic selection [novel + innovative], task statements, proposal reports, literature reviews, foreign-language translation, etc.)
  • Defense presentation (PPT) production

🌟 Welcome to: Like 👍 Save ⭐ Comment 📝
👇🏻 Featured columns below 👇🏻 Subscribe and follow!
Big Data Practical Projects
PHP|C#.NET|Golang Practical Projects
WeChat Mini Program | Android Practical Projects
Python Practical Projects
Java Practical Projects
🍅 ↓↓ Visit my profile to get the source code ↓↓ 🍅

Big-Data-Based Beijing Medical Insurance Drug Data Analysis System - Feature Overview

The big-data-based Beijing medical insurance drug data analysis system is an intelligent analytics platform built on a modern big-data stack. Its core architecture combines the Hadoop distributed storage framework with the Spark compute engine: HDFS provides reliable storage for large volumes of medical insurance drug data, Spark SQL handles efficient querying and processing, and the Python scientific-computing libraries Pandas and NumPy support in-depth analysis. The backend exposes RESTful API services built with Django; the frontend uses Vue.js with the ElementUI component library and the Echarts charting library to deliver an intuitive, user-friendly interface; and MySQL serves as the persistence layer to ensure data consistency and integrity. Functionally, the system mines Beijing's medical insurance drug data along five core dimensions: distribution of core drug attributes, manufacturer market share, reimbursement restriction policies, a special topic on traditional Chinese medicine (TCM) and formula granules, and machine-learning-based association and clustering analysis. Through more than ten sub-modules, including catalogue-level distribution, dosage-form composition, the manufacturer competitive landscape, text mining of reimbursement restriction clauses, and the ratio of Chinese to Western medicines, it provides policy makers, hospital administrators, and researchers with comprehensive, accurate, and timely analytical support for decision making, demonstrating the practical value and broad prospects of big-data technology in the medical insurance domain.

Big-Data-Based Beijing Medical Insurance Drug Data Analysis System - Background and Significance

Background
As China's medical security system continues to improve and its reform deepens, management of the medical insurance drug catalogue has become key to both the sound use of insurance funds and the control of patients' medical costs. According to statistics released by the National Healthcare Security Administration, by 2023 more than 1.34 billion people were enrolled in basic medical insurance nationwide, with annual fund expenditure exceeding 2.5 trillion CNY, of which drug costs accounted for roughly 30%. Beijing, one of the regions with the most concentrated medical resources in the country, has more than 3,000 designated medical institutions and a large, structurally complex drug catalogue covering tens of thousands of drug specifications. Traditional catalogue management relies on manual statistics and simple database queries, which can no longer support fine-grained management and deep analysis of data at this volume and dimensionality. Insurance administrators therefore urgently need big-data techniques to mine and analyze catalogue structure, manufacturer distribution, reimbursement policies, and other dimensions, so that policy can be formulated and adjusted on a scientific basis.
Significance
This project has both theoretical and practical value, supporting the informatization and intelligent development of medical insurance management. At the decision-making level, the system helps administrators understand the overall structure of the drug catalogue and the distribution of drugs across insurance levels, providing a scientific basis for dynamically adjusting and optimizing the catalogue and improving the efficiency of fund usage. For medical institutions, the analyses of manufacturer competition and reimbursement restrictions can guide hospitals in allocating drug resources, optimizing prescription structures, and reducing patients' cost burden. From a technical perspective, the system fuses traditional medical insurance data management with modern big-data technology, using Hadoop distributed storage and Spark parallel computing for efficient processing at scale and machine learning to uncover latent associations among drug attributes, offering a concrete reference case for medical big-data applications. The dedicated analysis of TCM formula granules also supports the modernization of traditional Chinese medicine and the formulation of insurance policy that integrates Chinese and Western medicine.

Big-Data-Based Beijing Medical Insurance Drug Data Analysis System - Technology Stack

Big-data frameworks: Hadoop + Spark (Hive not used in this build; customization supported)
Languages: Python + Java (both versions available)
Backend frameworks: Django + Spring Boot (Spring + SpringMVC + MyBatis) (both versions available)
Frontend: Vue + ElementUI + Echarts + HTML + CSS + JavaScript + jQuery
Key technologies: Hadoop, HDFS, Spark, Spark SQL, Pandas, NumPy
Database: MySQL
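As a sketch of how the Django backend could be pointed at the MySQL persistence layer listed above. The database name, user, and credentials here are placeholders for illustration, not the project's actual settings:

```python
# settings.py (fragment): hypothetical MySQL connection settings
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'medical_insurance',  # placeholder database name
        'USER': 'analytics_user',     # placeholder credentials
        'PASSWORD': 'change-me',
        'HOST': '127.0.0.1',
        'PORT': '3306',
    }
}
```

In practice the same MySQL instance would also be the source that the Spark analysis jobs read from, so keeping the connection settings in one place avoids drift between the two sides.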

Big-Data-Based Beijing Medical Insurance Drug Data Analysis System - Video Demo

A lifeline for graduation-project season: the Hadoop-based Beijing medical insurance drug data analysis system, so you are no longer lost

Big-Data-Based Beijing Medical Insurance Drug Data Analysis System - Screenshots

(Screenshot) Dashboard, upper section
(Screenshot) Dashboard, lower section
(Screenshot) Login
(Screenshot) Core drug attribute analysis
(Screenshot) Drug manufacturer analysis
(Screenshot) Drug data mining analysis
(Screenshot) Reimbursement policy analysis
(Screenshot) Medical insurance drug information
(Screenshot) TCM and formula granule analysis

Big-Data-Based Beijing Medical Insurance Drug Data Analysis System - Code Showcase

# Core feature 1: distribution analysis of core drug attributes
# Assumes: import pandas as pd, import numpy as np; self.spark is a SparkSession
# and self.db_connection is a SQLAlchemy connection to the MySQL database.

def analyze_drug_core_attributes(self):
    # Load all drug records from MySQL (pd.read_sql takes `con`, not `connection`)
    drugs_df = pd.read_sql("SELECT * FROM drug_info", con=self.db_connection)
    # Register the data as a Spark SQL view for distributed aggregation
    spark_df = self.spark.createDataFrame(drugs_df)
    spark_df.createOrReplaceTempView("drugs")
    # Distribution across medical insurance catalogue levels
    insurance_level_stats = self.spark.sql("""
        SELECT insurance_level,
               COUNT(*) AS count,
               ROUND(COUNT(*) * 100.0 / (SELECT COUNT(*) FROM drugs), 2) AS percentage
        FROM drugs
        WHERE insurance_level IS NOT NULL
        GROUP BY insurance_level
        ORDER BY count DESC
    """).collect()
    # Dosage-form distribution (forms with at least 10 drugs)
    dosage_form_stats = self.spark.sql("""
        SELECT dosage_form,
               COUNT(*) AS drug_count,
               AVG(self_payment_ratio) AS avg_payment_ratio
        FROM drugs
        WHERE dosage_form IS NOT NULL
        GROUP BY dosage_form
        HAVING COUNT(*) >= 10
        ORDER BY drug_count DESC
    """).collect()
    # Self-payment ratio statistics per insurance level
    payment_ratio_analysis = self.spark.sql("""
        SELECT insurance_level,
               AVG(self_payment_ratio) AS avg_ratio,
               MIN(self_payment_ratio) AS min_ratio,
               MAX(self_payment_ratio) AS max_ratio,
               STDDEV(self_payment_ratio) AS std_ratio
        FROM drugs
        WHERE insurance_level IS NOT NULL AND self_payment_ratio IS NOT NULL
        GROUP BY insurance_level
    """).collect()
    # Drugs reimbursed at a fixed payment ratio
    fixed_payment_drugs = self.spark.sql("""
        SELECT COUNT(*) AS fixed_payment_count,
               AVG(self_payment_ratio) AS avg_fixed_ratio
        FROM drugs
        WHERE fixed_payment_flag = 1
    """).collect()
    # High-frequency core drugs: the same generic name listed by many manufacturers
    high_frequency_drugs = self.spark.sql("""
        SELECT generic_name,
               COUNT(*) AS frequency,
               COUNT(DISTINCT manufacturer) AS manufacturer_count
        FROM drugs
        WHERE generic_name IS NOT NULL
        GROUP BY generic_name
        HAVING COUNT(*) >= 5
        ORDER BY frequency DESC
        LIMIT 20
    """).collect()
    # Deeper processing with Pandas: bucket drugs by patient payment burden
    drugs_pandas = drugs_df.copy()
    drugs_pandas['payment_burden_level'] = pd.cut(
        drugs_pandas['self_payment_ratio'],
        bins=[0, 0.1, 0.3, 0.6, 1.0],
        labels=['低负担', '中低负担', '中高负担', '高负担']
    )
    burden_distribution = drugs_pandas.groupby(
        ['insurance_level', 'payment_burden_level']
    ).size().unstack(fill_value=0)
    # Cross-tabulate dosage forms against insurance levels
    dosage_insurance_matrix = drugs_pandas.pivot_table(
        index='dosage_form',
        columns='insurance_level',
        values='drug_id',
        aggfunc='count',
        fill_value=0
    )
    result_data = {
        'insurance_level_distribution': [row.asDict() for row in insurance_level_stats],
        'dosage_form_analysis': [row.asDict() for row in dosage_form_stats],
        'payment_ratio_stats': [row.asDict() for row in payment_ratio_analysis],
        'fixed_payment_summary': fixed_payment_drugs[0].asDict() if fixed_payment_drugs else {},
        'high_frequency_drugs': [row.asDict() for row in high_frequency_drugs],
        'burden_distribution_matrix': burden_distribution.to_dict(),
        'dosage_insurance_correlation': dosage_insurance_matrix.to_dict()
    }
    return result_data
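The Pandas burden-bucketing step above can be exercised in isolation. A minimal sketch with made-up self-payment ratios (the bin edges mirror the function; the data and English labels are illustrative only):

```python
import pandas as pd

# Toy self-payment ratios standing in for the drug_info column
df = pd.DataFrame({'self_payment_ratio': [0.05, 0.25, 0.5, 0.9]})
# Bins are right-closed: 0.25 falls into (0.1, 0.3], i.e. 'medium-low'
df['payment_burden_level'] = pd.cut(
    df['self_payment_ratio'],
    bins=[0, 0.1, 0.3, 0.6, 1.0],
    labels=['low', 'medium-low', 'medium-high', 'high']
)
print(df['payment_burden_level'].tolist())
# -> ['low', 'medium-low', 'medium-high', 'high']
```

Because `pd.cut` returns an ordered categorical column, the later `groupby(['insurance_level', 'payment_burden_level'])` keeps the burden levels in their natural order rather than alphabetically.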

# Core feature 2: manufacturer-dimension analysis

def analyze_pharmaceutical_companies(self):
    # Load the manufacturer-related columns (pd.read_sql takes `con`)
    companies_df = pd.read_sql("""
        SELECT manufacturer, insurance_level, dosage_form, generic_name,
               self_payment_ratio, reimbursement_restriction
        FROM drug_info
        WHERE manufacturer IS NOT NULL AND manufacturer != '无'
    """, con=self.db_connection)
    spark_companies_df = self.spark.createDataFrame(companies_df)
    spark_companies_df.createOrReplaceTempView("company_drugs")
    # Market share by number of listed drugs
    company_market_share = self.spark.sql("""
        SELECT manufacturer,
               COUNT(*) AS drug_count,
               COUNT(DISTINCT generic_name) AS unique_drug_count,
               ROUND(COUNT(*) * 100.0 / (SELECT COUNT(*) FROM company_drugs), 3) AS market_share_percentage
        FROM company_drugs
        GROUP BY manufacturer
        ORDER BY drug_count DESC
        LIMIT 30
    """).collect()
    # Top 10 leading manufacturers
    top_companies = [row.manufacturer for row in company_market_share[:10]]
    # NOTE: interpolating names into SQL is injection-prone; acceptable for
    # trusted catalogue data, but parameterized queries are safer in general
    top_companies_filter = "', '".join(top_companies)
    # Product-portfolio strategy of the leading manufacturers
    top_company_portfolio = self.spark.sql(f"""
        SELECT manufacturer, insurance_level,
               COUNT(*) AS product_count,
               AVG(self_payment_ratio) AS avg_payment_ratio,
               ROUND(COUNT(*) * 100.0 / SUM(COUNT(*)) OVER (PARTITION BY manufacturer), 2) AS portfolio_percentage
        FROM company_drugs
        WHERE manufacturer IN ('{top_companies_filter}')
        GROUP BY manufacturer, insurance_level
        ORDER BY manufacturer, product_count DESC
    """).collect()
    # Dosage-form specialization of the leading manufacturers
    company_dosage_specialization = self.spark.sql(f"""
        SELECT manufacturer, dosage_form,
               COUNT(*) AS dosage_count,
               ROUND(COUNT(*) * 100.0 / SUM(COUNT(*)) OVER (PARTITION BY manufacturer), 2) AS specialization_rate
        FROM company_drugs
        WHERE manufacturer IN ('{top_companies_filter}')
        GROUP BY manufacturer, dosage_form
        HAVING COUNT(*) >= 3
        ORDER BY manufacturer, dosage_count DESC
    """).collect()
    # Product-diversity indices per manufacturer (Shannon entropy, Simpson index)
    company_diversity_analysis = []
    for company in top_companies:
        company_data = companies_df[companies_df['manufacturer'] == company]
        dosage_counts = company_data['dosage_form'].value_counts()
        total_products = len(company_data)
        # Shannon diversity index (base-2 entropy of the dosage-form proportions)
        shannon_entropy = 0
        for count in dosage_counts:
            proportion = count / total_products
            if proportion > 0:
                shannon_entropy -= proportion * np.log2(proportion)
        # Simpson diversity index (1 minus the sum of squared proportions)
        simpson_index = sum((count / total_products) ** 2 for count in dosage_counts)
        simpson_diversity = 1 - simpson_index
        company_diversity_analysis.append({
            'manufacturer': company,
            'total_products': total_products,
            'dosage_variety': len(dosage_counts),
            'shannon_diversity': round(shannon_entropy, 3),
            'simpson_diversity': round(simpson_diversity, 3)
        })
    # Positioning strategy across insurance levels
    company_strategy_matrix = companies_df.groupby(
        ['manufacturer', 'insurance_level']
    ).size().unstack(fill_value=0)
    # Average self-payment ratio and product positioning
    company_positioning = self.spark.sql(f"""
        SELECT manufacturer,
               AVG(self_payment_ratio) AS avg_self_payment,
               MIN(self_payment_ratio) AS min_self_payment,
               MAX(self_payment_ratio) AS max_self_payment,
               COUNT(CASE WHEN self_payment_ratio <= 0.1 THEN 1 END) AS low_burden_products,
               COUNT(CASE WHEN self_payment_ratio > 0.5 THEN 1 END) AS high_burden_products
        FROM company_drugs
        WHERE manufacturer IN ('{top_companies_filter}') AND self_payment_ratio IS NOT NULL
        GROUP BY manufacturer
        ORDER BY avg_self_payment
    """).collect()
    # Share of each manufacturer's products carrying reimbursement restrictions
    restricted_products_analysis = self.spark.sql(f"""
        SELECT manufacturer,
               COUNT(*) AS total_products,
               COUNT(CASE WHEN reimbursement_restriction IS NOT NULL AND reimbursement_restriction != '' THEN 1 END) AS restricted_products,
               ROUND(COUNT(CASE WHEN reimbursement_restriction IS NOT NULL AND reimbursement_restriction != '' THEN 1 END) * 100.0 / COUNT(*), 2) AS restriction_rate
        FROM company_drugs
        WHERE manufacturer IN ('{top_companies_filter}')
        GROUP BY manufacturer
        ORDER BY restriction_rate DESC
    """).collect()
    return {
        'market_share_ranking': [row.asDict() for row in company_market_share],
        'top_company_portfolio': [row.asDict() for row in top_company_portfolio],
        'dosage_specialization': [row.asDict() for row in company_dosage_specialization],
        'diversity_analysis': company_diversity_analysis,
        'company_positioning': [row.asDict() for row in company_positioning],
        'restriction_analysis': [row.asDict() for row in restricted_products_analysis],
        'strategy_matrix': company_strategy_matrix.to_dict()
    }
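The Shannon and Simpson indices used in the diversity loop above have easy hand-checkable values: for a portfolio split evenly across two dosage forms, base-2 Shannon entropy is 1.0 bit and Simpson diversity is 0.5; a single-form portfolio scores 0 on both. A standalone sketch using only the standard library:

```python
import math

def diversity_indices(counts):
    """Shannon entropy (base 2) and Simpson diversity for category counts."""
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    shannon = -sum(p * math.log2(p) for p in props)      # higher = more even
    simpson = 1 - sum(p ** 2 for p in props)             # 0 = one category only
    return round(shannon, 3), round(simpson, 3)

print(diversity_indices([10, 10]))  # -> (1.0, 0.5)
print(diversity_indices([20]))      # -> (0.0, 0.0)
```

The two indices complement each other: Shannon weights rare categories more heavily, while Simpson is dominated by the most common ones, so reporting both (as the function does) gives a fuller picture of a manufacturer's portfolio spread.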

# Core feature 3: algorithm-based exploratory association and clustering analysis

def perform_advanced_analytics(self):
    # Build the machine-learning dataset (pd.read_sql takes `con`)
    ml_dataset = pd.read_sql("""
        SELECT drug_id, generic_name, manufacturer, dosage_form, insurance_level,
               self_payment_ratio, reimbursement_restriction, fixed_payment_flag
        FROM drug_info
        WHERE manufacturer IS NOT NULL AND dosage_form IS NOT NULL
        AND insurance_level IS NOT NULL AND self_payment_ratio IS NOT NULL
    """, con=self.db_connection)
    # Preprocessing and feature engineering
    from sklearn.preprocessing import LabelEncoder, StandardScaler
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer
    from mlxtend.frequent_patterns import apriori, association_rules
    # Encode the categorical variables
    label_encoders = {}
    categorical_features = ['dosage_form', 'insurance_level', 'manufacturer']
    for feature in categorical_features:
        le = LabelEncoder()
        ml_dataset[f'{feature}_encoded'] = le.fit_transform(ml_dataset[feature])
        label_encoders[feature] = le
    # One-hot transaction table for association-rule mining
    transaction_data = ml_dataset[['dosage_form', 'insurance_level']].copy()
    transaction_encoded = pd.get_dummies(transaction_data)
    # Mine dosage-form / insurance-level rules with Apriori
    frequent_itemsets = apriori(transaction_encoded, min_support=0.05, use_colnames=True)
    if len(frequent_itemsets) > 0:
        dosage_insurance_rules = association_rules(
            frequent_itemsets,
            metric="confidence",
            min_threshold=0.6
        )
        dosage_insurance_rules['lift_score'] = dosage_insurance_rules['lift']
        dosage_insurance_rules = dosage_insurance_rules.sort_values('confidence', ascending=False)
    else:
        dosage_insurance_rules = pd.DataFrame()
    # Manufacturer / insurance-level rules (top manufacturers only)
    top_manufacturers = ml_dataset['manufacturer'].value_counts().head(15).index.tolist()
    manufacturer_data = ml_dataset[ml_dataset['manufacturer'].isin(top_manufacturers)][['manufacturer', 'insurance_level']]
    manufacturer_encoded = pd.get_dummies(manufacturer_data)
    manufacturer_frequent = apriori(manufacturer_encoded, min_support=0.03, use_colnames=True)
    if len(manufacturer_frequent) > 0:
        manufacturer_rules = association_rules(
            manufacturer_frequent,
            metric="confidence",
            min_threshold=0.5
        )
        manufacturer_rules = manufacturer_rules.sort_values('lift', ascending=False)
    else:
        manufacturer_rules = pd.DataFrame()
    # Feature matrix for clustering
    clustering_features = ml_dataset[['self_payment_ratio', 'insurance_level_encoded', 'dosage_form_encoded']].copy()
    # Binary flag for the presence of a reimbursement restriction
    ml_dataset['has_restriction'] = ml_dataset['reimbursement_restriction'].apply(
        lambda x: 1 if pd.notna(x) and str(x).strip() != '' else 0
    )
    clustering_features['has_restriction'] = ml_dataset['has_restriction']
    clustering_features['fixed_payment_flag'] = ml_dataset['fixed_payment_flag'].fillna(0)
    # Standardize before distance-based clustering
    scaler = StandardScaler()
    clustering_features_scaled = scaler.fit_transform(clustering_features)
    # K-Means clustering
    optimal_k = 5  # could be tuned with the elbow method
    kmeans = KMeans(n_clusters=optimal_k, random_state=42, n_init=10)
    cluster_labels = kmeans.fit_predict(clustering_features_scaled)
    # Summarize each cluster
    ml_dataset['cluster_label'] = cluster_labels
    cluster_analysis = []
    for cluster_id in range(optimal_k):
        cluster_data = ml_dataset[ml_dataset['cluster_label'] == cluster_id]
        cluster_stats = {
            'cluster_id': cluster_id,
            'size': len(cluster_data),
            'avg_self_payment': round(cluster_data['self_payment_ratio'].mean(), 3),
            'dominant_insurance_level': cluster_data['insurance_level'].mode().iloc[0] if len(cluster_data) > 0 else 'unknown',
            'dominant_dosage_form': cluster_data['dosage_form'].mode().iloc[0] if len(cluster_data) > 0 else 'unknown',
            'restriction_rate': round(cluster_data['has_restriction'].mean() * 100, 2),
            'top_manufacturers': cluster_data['manufacturer'].value_counts().head(3).to_dict()
        }
        cluster_analysis.append(cluster_stats)
    # Cluster the reimbursement-restriction clause texts
    restriction_texts = ml_dataset[ml_dataset['reimbursement_restriction'].notna()]['reimbursement_restriction'].tolist()
    if len(restriction_texts) > 10:
        # TF-IDF vectorization; the default tokenizer splits on word boundaries,
        # so Chinese clause text should ideally be pre-segmented (e.g. with jieba)
        tfidf_vectorizer = TfidfVectorizer(max_features=100, stop_words=None, ngram_range=(1, 2))
        tfidf_matrix = tfidf_vectorizer.fit_transform(restriction_texts)
        # K-Means on the TF-IDF vectors
        text_kmeans = KMeans(n_clusters=4, random_state=42)
        text_clusters = text_kmeans.fit_predict(tfidf_matrix)
        # Summarize the text clusters
        text_cluster_analysis = []
        for cluster_id in range(4):
            cluster_texts = [restriction_texts[i] for i, label in enumerate(text_clusters) if label == cluster_id]
            text_cluster_analysis.append({
                'cluster_id': cluster_id,
                'size': len(cluster_texts),
                'sample_texts': cluster_texts[:3] if cluster_texts else []
            })
    else:
        text_cluster_analysis = []
    # Feature-level summary statistics
    feature_importance = {
        'self_payment_ratio_variance': float(np.var(clustering_features['self_payment_ratio'])),
        'insurance_level_distribution': ml_dataset['insurance_level'].value_counts(normalize=True).to_dict(),
        'dosage_form_diversity': len(ml_dataset['dosage_form'].unique()),
        'restriction_prevalence': float(ml_dataset['has_restriction'].mean())
    }
    return {
        'dosage_insurance_association_rules': dosage_insurance_rules.head(10).to_dict('records') if not dosage_insurance_rules.empty else [],
        'manufacturer_association_rules': manufacturer_rules.head(10).to_dict('records') if not manufacturer_rules.empty else [],
        'drug_clustering_results': cluster_analysis,
        'text_clustering_analysis': text_cluster_analysis,
        'feature_importance_analysis': feature_importance,
        'clustering_model_performance': {
            'inertia': float(kmeans.inertia_),
            'cluster_centers': kmeans.cluster_centers_.tolist()
        }
    }
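Support, confidence, and lift, the metrics that Apriori and `association_rules` report above, can be verified by hand on a toy transaction set. The items below are illustrative, not drawn from the real catalogue:

```python
def rule_metrics(transactions, antecedent, consequent):
    """Support, confidence, and lift for the rule antecedent -> consequent."""
    n = len(transactions)
    both = sum(1 for t in transactions if antecedent <= t and consequent <= t)
    ante = sum(1 for t in transactions if antecedent <= t)
    cons = sum(1 for t in transactions if consequent <= t)
    support = both / n                     # P(antecedent and consequent)
    confidence = both / ante               # P(consequent | antecedent)
    lift = confidence / (cons / n)         # confidence / P(consequent)
    return support, confidence, lift

# Toy "dosage form -> insurance level" transactions
tx = [
    {'injection', 'Class B'},
    {'injection', 'Class B'},
    {'tablet', 'Class A'},
    {'injection', 'Class A'},
]
print(rule_metrics(tx, {'injection'}, {'Class B'}))
```

Here {injection} -> {Class B} holds in 2 of 4 transactions (support 0.5), in 2 of the 3 injection transactions (confidence 2/3), and lift is (2/3) / (2/4) = 4/3, i.e. the two items co-occur a third more often than independence would predict, which is exactly the signal `association_rules` sorts on above.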

Big-Data-Based Beijing Medical Insurance Drug Data Analysis System - Closing Remarks

