Fetching and Storing WeChat Statistics Data

2023-12-16 02:58

This article describes how to fetch statistics data for a WeChat official account through the official API and store it in MySQL. It is intended as a practical reference for developers facing the same task.

Contents:

1. Requirements
2. Preparation
3. Code structure
4. MySQL table design
5. Code
6. Results

———————————————————————————————-

1. Requirements:

Become familiar with the interfaces for fetching WeChat statistics data and design a data-collection scheme. The WeChat data API documentation is at: https://mp.weixin.qq.com/wiki/15/88726a421bfc54654a3095821c3ca3bb.html
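All of the statistics endpoints used in this article share the same calling convention: POST a small JSON body containing begin_date and end_date to https://api.weixin.qq.com/datacube/<endpoint>?access_token=ACCESS_TOKEN, then read the per-day (or per-hour) records from the list field of the JSON response. A minimal sketch of that pattern, using the requests library and getusersummary as the example endpoint (the token value is a placeholder; obtaining a real one is covered in section 2):

#!/usr/bin/python
# -*- coding: UTF-8 -*-
# Minimal sketch of the shared datacube request pattern (getusersummary as example).
import json
import requests

ACCESS_TOKEN = "your-access-token"  # placeholder, see section 2
payload = {"begin_date": "2016-12-25", "end_date": "2016-12-25"}
resp = requests.post(
    "https://api.weixin.qq.com/datacube/getusersummary?access_token=" + ACCESS_TOKEN,
    data=json.dumps(payload))
print(resp.json()["list"])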

2. Preparation:

1. ACCESS_TOKEN: obtain the ACCESS_TOKEN, the official account's globally unique credential required by every statistics call (a quick test of the token request is sketched after this list).
2. The official account's AppID and AppSecret are required in order to obtain the ACCESS_TOKEN.
3. Table-creation privileges on the target MySQL database are required.
4. Suggested reading order for the code: weiXinEntry.py -> DataAnalysis.py -> MysqlUtils.py
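Before wiring up the full scripts, the AppID/AppSecret pair can be verified by calling the token endpoint directly. The sketch below uses the requests library for brevity (weiXinEntry.py later does the same thing with urllib2); the credential values are placeholders:

#!/usr/bin/python
# -*- coding: UTF-8 -*-
# Sketch: fetch the ACCESS_TOKEN; AppID and AppSecret are placeholders.
import requests

AppID = "your-AppID"
AppSecret = "your-AppSecret"
resp = requests.get(
    "https://api.weixin.qq.com/cgi-bin/token",
    params={"grant_type": "client_credential", "appid": AppID, "secret": AppSecret})
print(resp.json().get("access_token"))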

3. Code structure:

The project consists of three scripts: weiXinEntry.py (the entry point, which obtains the ACCESS_TOKEN and loops over the date range), DataAnalysis.py (wrappers for the statistics endpoints plus the INSERT logic), and MysqlUtils.py (the MySQL connection and execute/query helpers).

4. MySQL table design:

WeiXinSQL

CREATE TABLE getUser (
  id INT NOT NULL AUTO_INCREMENT,
  datasource VARCHAR(45) NULL,
  ref_date VARCHAR(45) NULL,
  user_source INT NULL,
  new_user INT NULL,
  cancel_user INT NULL,
  cumulate_user INT NULL,
  PRIMARY KEY (id, datasource)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

INSERT INTO `getUser` (`datasource`, `ref_date`, `user_source`, `new_user`, `cancel_user`, `cumulate_user`)
VALUES ('getusersummary', '2016-12-02', 0, 0, 0, 0);

CREATE TABLE getArticle (
  id INT NOT NULL AUTO_INCREMENT,
  datasource VARCHAR(45) NULL,
  ref_date VARCHAR(45) NULL,
  ref_hour INT NULL,
  stat_date VARCHAR(45) NULL,
  msgid VARCHAR(45) NULL,
  title VARCHAR(45) NULL,
  int_page_read_user INT NULL,
  int_page_read_count INT NULL,
  ori_page_read_user INT NULL,
  ori_page_read_count INT NULL,
  share_scene INT NULL,
  share_user INT NULL,
  share_count INT NULL,
  add_to_fav_user INT NULL,
  add_to_fav_count INT NULL,
  target_user INT NULL,
  user_source INT NULL,
  PRIMARY KEY (id, datasource)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

INSERT INTO `getArticle` (`datasource`, `title`, `ref_date`, `ref_hour`, `stat_date`, `msgid`, `int_page_read_user`, `int_page_read_count`, `ori_page_read_user`, `ori_page_read_count`, `share_scene`, `share_user`, `share_count`, `add_to_fav_user`, `add_to_fav_count`, `target_user`, `user_source`)
VALUES ('getuserreadhour', '', '2017-01-03', 1500, '0', '0', 1, 2, 0, 0, 0, 0, 0, 0, 0, 0, 5);

CREATE TABLE getInterface (
  id INT NOT NULL AUTO_INCREMENT,
  datasource VARCHAR(45) NULL,
  ref_date VARCHAR(45) NULL,
  ref_hour INT NULL,
  callback_count INT NULL,
  fail_count INT NULL,
  total_time_cost INT NULL,
  max_time_cost INT NULL,
  PRIMARY KEY (id, datasource)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

INSERT INTO `getInterface` (`datasource`, `ref_date`, `ref_hour`, `callback_count`, `fail_count`, `total_time_cost`, `max_time_cost`)
VALUES ('getinterfacesummary', '2017-01-03', 0, 3, 0, 950, 340);

CREATE TABLE getupStreammsg (
  id INT NOT NULL AUTO_INCREMENT,
  datasource VARCHAR(45) NULL,
  ref_date VARCHAR(45) NULL,
  ref_hour INT NULL,
  msg_type INT NULL,
  msg_user INT NULL,
  msg_count INT NULL,
  count_interval INT NULL,
  int_page_read_count INT NULL,
  ori_page_read_user INT NULL,
  PRIMARY KEY (id, datasource)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

INSERT INTO `getupStreammsg` (`datasource`, `ref_date`, `ref_hour`, `msg_type`, `msg_user`, `msg_count`, `count_interval`, `int_page_read_count`, `ori_page_read_user`)
VALUES ('getupstreammsg', '2016-12-01', 0, 1, 1, 101, 0, 0, 0);
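If you prefer to create the tables from Python rather than the mysql client, a minimal sketch with mysql.connector is shown below for the getUser table; the connection parameters are placeholders mirroring the ones used in MysqlUtils.py later, and the other three tables can be created the same way.

#!/usr/bin/python
# -*- coding: UTF-8 -*-
# Sketch: create the getUser table with mysql.connector (placeholder connection values).
import mysql.connector

GETUSER_DDL = """
CREATE TABLE IF NOT EXISTS getUser (
  id INT NOT NULL AUTO_INCREMENT,
  datasource VARCHAR(45) NULL,
  ref_date VARCHAR(45) NULL,
  user_source INT NULL,
  new_user INT NULL,
  cancel_user INT NULL,
  cumulate_user INT NULL,
  PRIMARY KEY (id, datasource)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
"""

db = mysql.connector.connect(host='ip', user='root', passwd='123456', db='bi_data')
cursor = db.cursor()
cursor.execute(GETUSER_DDL)
db.commit()
cursor.close()
db.close()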

5. Code:

weiXinEntry.py

#!/usr/bin/python
# -*- coding: UTF-8 -*-
import datetime  # date/time handling
import json
import urllib2

import DataAnalysis


def get_content(url):
    "Fetch the raw content of a URL"
    html = urllib2.urlopen(url)
    content = html.read()
    html.close()
    return content


def getAccess_token(AppID, AppSecret):
    "Fetch the ACCESS_TOKEN, the official account's globally unique credential"
    url = "https://api.weixin.qq.com/cgi-bin/token?grant_type=client_credential&appid=" + AppID + "&secret=" + AppSecret
    info = get_content(url)
    access_token = json.loads(info)["access_token"]
    return access_token


if __name__ == "__main__":
    # official account credentials
    AppID = "your AppID"
    AppSecret = "your AppSecret"
    ACCESS_TOKEN = getAccess_token(AppID, AppSecret)
    print(ACCESS_TOKEN)

    # pull the statistics one day at a time over the date range
    startDay = "2016-12-25"
    endDay = "2017-01-16"
    StartDay = datetime.datetime.strptime(startDay, "%Y-%m-%d").date()
    EndDay = datetime.datetime.strptime(endDay, "%Y-%m-%d").date()
    countdays = (EndDay - StartDay).days
    count = 0
    dataAnalysis = DataAnalysis.getDataAnalysis()
    while count < countdays:
        FirstDay = StartDay + datetime.timedelta(days=count)
        print("FirstDay  :  " + str(FirstDay))
        dataAnalysis.getUser(ACCESS_TOKEN, FirstDay, FirstDay)
        dataAnalysis.getArticle(ACCESS_TOKEN, FirstDay, FirstDay)
        dataAnalysis.getupstreammsg(ACCESS_TOKEN, FirstDay, FirstDay)
        dataAnalysis.getInterface(ACCESS_TOKEN, FirstDay, FirstDay)
        count = count + 1

DataAnalysis.py

#!/usr/bin/python
# -*- coding: UTF-8 -*-
import json
import requests

import MysqlUtils


class getDataAnalysis:
    # WeChat statistics collection

    def getUserInsert(self, name, datasource):
        # fields the getUser response may contain; missing ones default to 0
        tup2 = ("user_source", "new_user", "cancel_user", "cumulate_user")
        try:
            for i in datasource['list']:
                for tup in tup2:
                    if i.has_key(tup):
                        continue
                    else:
                        i[tup] = 0
                sql = "INSERT INTO `getUser` (`datasource`, `ref_date`, `user_source`, `new_user` , `cancel_user` , `cumulate_user`) " \
                      "VALUES  ('%s','%s',%d,%d,%d,%d)" % (name, i['ref_date'], i['user_source'], i['new_user'], i['cancel_user'], i['cumulate_user'])
                print(sql)
                MysqlUtils.dbexecute(sql)
        except:
            print("ERROR ----  ", name, datasource)

    def getUser(self, access_token, begin_date, end_date):
        "User analysis endpoints"
        r_date = {'begin_date': str(begin_date), 'end_date': str(end_date)}
        # daily user growth/cancellation data
        summary = requests.post("https://api.weixin.qq.com/datacube/getusersummary?access_token=" + access_token, data=json.dumps(r_date))
        getusersummary = json.loads(summary.content)
        getDataAnalysis().getUserInsert("getusersummary", getusersummary)
        # cumulative user data
        cumulate = requests.post("https://api.weixin.qq.com/datacube/getusercumulate?access_token=" + access_token, data=json.dumps(r_date))
        getusercumulate = json.loads(cumulate.content)
        getDataAnalysis().getUserInsert("getusercumulate", getusercumulate)

    def getArticledetailsInsert(self, name, datasource):
        # responses with a nested 'details' list (getarticletotal)
        tup2 = ("msgid", "title")
        details = ("stat_date", "target_user", "int_page_read_user", "int_page_read_count", "ori_page_read_user",
                   "ori_page_read_count", "share_user", "share_count", "add_to_fav_user", "add_to_fav_count")
        try:
            for i in datasource['list']:
                for tup in tup2:
                    if i.has_key(tup):
                        continue
                    else:
                        i[tup] = 0
                for j in i['details']:
                    for detail in details:
                        if j.has_key(detail):
                            continue
                        else:
                            j[detail] = 0
                    sql = "INSERT INTO `getArticle` (`datasource`,`title`,`ref_date`,`stat_date`,`msgid`,`int_page_read_user`,`int_page_read_count`,`ori_page_read_user`," \
                          "`ori_page_read_count`,`share_user`,`share_count`,`add_to_fav_user`,`add_to_fav_count`,`target_user`) " \
                          "VALUES  ('%s','%s','%s','%s','%s',%d,%d,%d,%d,%d,%d,%d,%d,%d)" % (name, i['title'], i['ref_date'], j['stat_date'], i['msgid'], j['int_page_read_user'], j['int_page_read_count'], j['ori_page_read_user'], j['ori_page_read_count'], j['share_user'], j['share_count'], j['add_to_fav_user'], j['add_to_fav_count'], j['target_user'])
                    print(sql)
                    MysqlUtils.dbexecute(sql)
        except:
            print("ERROR ----  ", name, datasource)

    def getArticleInsert(self, name, datasource):
        # fields the getArticle response may contain; missing ones default to 0
        tup2 = ("ref_hour", "stat_date", "msgid", "int_page_read_user", "int_page_read_count", "ori_page_read_user",
                "ori_page_read_count", "share_scene", "share_user", "share_count", "add_to_fav_user",
                "add_to_fav_count", "target_user", "user_source")
        try:
            for i in datasource['list']:
                for tup in tup2:
                    if i.has_key(tup):
                        continue
                    else:
                        i[tup] = 0
                if i.has_key("title"):
                    continue
                else:
                    i["title"] = ""
                sql = "INSERT INTO `getArticle` (`datasource`,`title`,`ref_date`,`ref_hour`,`stat_date`,`msgid`,`int_page_read_user`,`int_page_read_count`,`ori_page_read_user`," \
                      "`ori_page_read_count`,`share_scene`,`share_user`,`share_count`,`add_to_fav_user`,`add_to_fav_count`,`target_user`,`user_source`) " \
                      "VALUES  ('%s','%s','%s',%d,'%s','%s',%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d)" % (name, i['title'], i['ref_date'], i['ref_hour'], i['stat_date'], i['msgid'], i['int_page_read_user'], i['int_page_read_count'], i['ori_page_read_user'], i['ori_page_read_count'], i['share_scene'], i['share_user'], i['share_count'], i['add_to_fav_user'], i['add_to_fav_count'], i['target_user'], i['user_source'])
                print(sql)
                MysqlUtils.dbexecute(sql)
        except:
            print("ERROR ----  ", name, datasource)

    def getArticle(self, access_token, begin_date, end_date):
        "Article (news) analysis endpoints"
        r_date = {'begin_date': str(begin_date), 'end_date': str(end_date)}
        # daily article summary data
        summary = requests.post("https://api.weixin.qq.com/datacube/getarticlesummary?access_token=" + access_token, data=json.dumps(r_date))
        getarticlesummary = json.loads(summary.content)
        getDataAnalysis().getArticleInsert("getarticlesummary", getarticlesummary)
        # article totals (response carries a nested 'details' list)
        total = requests.post("https://api.weixin.qq.com/datacube/getarticletotal?access_token=" + access_token, data=json.dumps(r_date))
        getarticletotal = json.loads(total.content)
        getDataAnalysis().getArticledetailsInsert("getarticletotal", getarticletotal)
        # article reading statistics
        read = requests.post("https://api.weixin.qq.com/datacube/getuserread?access_token=" + access_token, data=json.dumps(r_date))
        getuserread = json.loads(read.content)
        getDataAnalysis().getArticleInsert("getuserread", getuserread)
        # hourly article reading statistics
        hour = requests.post("https://api.weixin.qq.com/datacube/getuserreadhour?access_token=" + access_token, data=json.dumps(r_date))
        getuserreadhour = json.loads(hour.content)
        getDataAnalysis().getArticleInsert("getuserreadhour", getuserreadhour)
        # article sharing statistics
        share = requests.post("https://api.weixin.qq.com/datacube/getusershare?access_token=" + access_token, data=json.dumps(r_date))
        getusershare = json.loads(share.content)
        getDataAnalysis().getArticleInsert("getusershare", getusershare)
        # hourly article sharing statistics
        sharehour = requests.post("https://api.weixin.qq.com/datacube/getusersharehour?access_token=" + access_token, data=json.dumps(r_date))
        getusersharehour = json.loads(sharehour.content)
        getDataAnalysis().getArticleInsert("getusersharehour", getusersharehour)

    def getupstreammsgInsert(self, name, datasource):
        # fields the getupStreammsg response may contain; missing ones default to 0
        tup2 = ("ref_hour", "msg_type", "msg_user", "msg_count", "count_interval", "int_page_read_count", "ori_page_read_user")
        try:
            for i in datasource['list']:
                for tup in tup2:
                    if i.has_key(tup):
                        continue
                    else:
                        i[tup] = 0
                sql = "INSERT INTO `getupStreammsg` (`datasource`,`ref_date`,`ref_hour`,`msg_type`,`msg_user`,`msg_count`,`count_interval`,`int_page_read_count`,`ori_page_read_user`) " \
                      "VALUES  ('%s','%s',%d,%d,%d,%d,%d,%d,%d)" % (name, i['ref_date'], i['ref_hour'], i['msg_type'], i['msg_user'], i['msg_count'], i['count_interval'], i['int_page_read_count'], i['ori_page_read_user'])
                print(sql)
                MysqlUtils.dbexecute(sql)
        except:
            print("ERROR ----  ", name, datasource)

    def getupstreammsg(self, access_token, begin_date, end_date):
        "Message analysis endpoints"
        r_date = {'begin_date': str(begin_date), 'end_date': str(end_date)}
        # message sending overview
        streammsg = requests.post("https://api.weixin.qq.com/datacube/getupstreammsg?access_token=" + access_token, data=json.dumps(r_date))
        getupstreammsg = json.loads(streammsg.content)
        getDataAnalysis().getupstreammsgInsert("getupstreammsg", getupstreammsg)
        # hourly message sending data
        streammsghour = requests.post("https://api.weixin.qq.com/datacube/getupstreammsghour?access_token=" + access_token, data=json.dumps(r_date))
        getupstreammsghour = json.loads(streammsghour.content)
        getDataAnalysis().getupstreammsgInsert("getupstreammsghour", getupstreammsghour)
        # weekly message sending data
        streammsgweek = requests.post("https://api.weixin.qq.com/datacube/getupstreammsgweek?access_token=" + access_token, data=json.dumps(r_date))
        getupstreammsgweek = json.loads(streammsgweek.content)
        getDataAnalysis().getupstreammsgInsert("getupstreammsgweek", getupstreammsgweek)
        # monthly message sending data
        streammsgmonth = requests.post("https://api.weixin.qq.com/datacube/getupstreammsgmonth?access_token=" + access_token, data=json.dumps(r_date))
        getupstreammsgmonth = json.loads(streammsgmonth.content)
        getDataAnalysis().getupstreammsgInsert("getupstreammsgmonth", getupstreammsgmonth)
        # message sending distribution data
        streammsgdist = requests.post("https://api.weixin.qq.com/datacube/getupstreammsgdist?access_token=" + access_token, data=json.dumps(r_date))
        getupstreammsgdist = json.loads(streammsgdist.content)
        getDataAnalysis().getupstreammsgInsert("getupstreammsgdist", getupstreammsgdist)
        # weekly message sending distribution data
        streammsgdistweek = requests.post("https://api.weixin.qq.com/datacube/getupstreammsgdistweek?access_token=" + access_token, data=json.dumps(r_date))
        getupstreammsgdistweek = json.loads(streammsgdistweek.content)
        getDataAnalysis().getupstreammsgInsert("getupstreammsgdistweek", getupstreammsgdistweek)
        # monthly message sending distribution data
        streammsgdistmonth = requests.post("https://api.weixin.qq.com/datacube/getupstreammsgdistmonth?access_token=" + access_token, data=json.dumps(r_date))
        getupstreammsgdistmonth = json.loads(streammsgdistmonth.content)
        getDataAnalysis().getupstreammsgInsert("getupstreammsgdistmonth", getupstreammsgdistmonth)

    def getInterfaceInsert(self, name, datasource):
        # fields the getInterface response may contain; missing ones default to 0
        tup2 = ("ref_hour", "callback_count", "fail_count", "total_time_cost", "max_time_cost")
        try:
            for i in datasource['list']:
                for tup in tup2:
                    if i.has_key(tup):
                        continue
                    else:
                        i[tup] = 0
                sql = "INSERT INTO `getInterface` (`datasource`,`ref_date`,`ref_hour`,`callback_count`,`fail_count`,`total_time_cost`,`max_time_cost`) " \
                      "VALUES  ('%s','%s',%d,%d,%d,%d,%d)" % (name, i['ref_date'], i['ref_hour'], i['callback_count'], i['fail_count'], i['total_time_cost'], i['max_time_cost'])
                print(sql)
                MysqlUtils.dbexecute(sql)
        except:
            print("ERROR ----  ", name, datasource)

    def getInterface(self, access_token, begin_date, end_date):
        "Interface analysis endpoints"
        r_date = {'begin_date': str(begin_date), 'end_date': str(end_date)}
        # interface summary data
        summary = requests.post("https://api.weixin.qq.com/datacube/getinterfacesummary?access_token=" + access_token, data=json.dumps(r_date))
        getinterfacesummary = json.loads(summary.content)
        getDataAnalysis().getInterfaceInsert("getinterfacesummary", getinterfacesummary)
        # hourly interface summary data
        summaryhour = requests.post("https://api.weixin.qq.com/datacube/getinterfacesummaryhour?access_token=" + access_token, data=json.dumps(r_date))
        getinterfacesummaryhour = json.loads(summaryhour.content)
        getDataAnalysis().getInterfaceInsert("getinterfacesummaryhour", getinterfacesummaryhour)
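A note on the INSERT helpers above: they build SQL strings with the % operator, which works here because the WeChat responses only contain dates and counters. If you prefer the driver to handle quoting and escaping, a sketch of a parameterized variant of the getUser insert is shown below; dbexecute_params and insert_get_user are hypothetical helpers, not part of the MysqlUtils.py module that follows.

#!/usr/bin/python
# -*- coding: UTF-8 -*-
# Sketch: parameterized variant of the getUser insert.
# 'db' is an open mysql.connector connection; both helpers below are hypothetical.


def dbexecute_params(db, sql, params):
    # execute one parameterized statement and commit
    cursor = db.cursor()
    cursor.execute(sql, params)
    db.commit()
    cursor.close()


def insert_get_user(db, name, row):
    # 'row' is one element of the response's "list"; missing counters default to 0
    sql = ("INSERT INTO `getUser` (`datasource`, `ref_date`, `user_source`, "
           "`new_user`, `cancel_user`, `cumulate_user`) "
           "VALUES (%s, %s, %s, %s, %s, %s)")
    params = (name,
              row.get('ref_date'),
              row.get('user_source', 0),
              row.get('new_user', 0),
              row.get('cancel_user', 0),
              row.get('cumulate_user', 0))
    dbexecute_params(db, sql, params)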

MysqlUtils.py

#!/usr/bin/python
# -*- coding: UTF-8 -*-
import mysql.connector

# open the database connection (placeholder host/credentials)
db = mysql.connector.connect(host='ip', user='root', passwd='123456', db='bi_data')
# obtain a cursor with cursor()
cursor = db.cursor()


def dbexecute(sql):
    "Insert or delete data"
    try:
        # execute the SQL statement
        cursor.execute(sql)
        # commit the transaction
        db.commit()
    except:
        # roll back on error
        db.rollback()
        # and close the database connection
        db.close()


def dbquery(sql):
    "Run a SQL query"
    try:
        # execute the SQL statement
        cursor.execute(sql)
        # fetch all rows
        results = cursor.fetchall()
        return results
    except:
        print("Error: unable to fetch data")


def dbClose():
    # close the cursor
    cursor.close()


if __name__ == "__main__":
    # small self-test of the "default missing fields to 0" logic used in DataAnalysis.py
    json = {"list": [{"ref_date": "2017-01-02", "user_source": 0, "cumulate_user": 5}]}
    tup2 = ("user_source", "new_user", "cancel_user", "cumulate_user")
    for i in json['list']:
        for tup in tup2:
            if i.has_key(tup):
                print(i.has_key('cancel_user'))
            else:
                i[tup] = 0
        print(i)
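Before moving on to the results, a quick way to confirm that rows actually landed in MySQL is to reuse the dbquery helper above. The date in the WHERE clause is just an example value taken from the range used in weiXinEntry.py:

#!/usr/bin/python
# -*- coding: UTF-8 -*-
# Sketch: verify the loaded data with the dbquery helper from MysqlUtils.py.
import MysqlUtils

rows = MysqlUtils.dbquery(
    "SELECT datasource, ref_date, new_user, cancel_user, cumulate_user "
    "FROM getUser WHERE ref_date = '2016-12-25'")
for row in rows:
    print(row)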

6. Results:

(Screenshots of the populated MySQL tables from the original post are omitted here.)


