Using Python and the zdppy_es framework to perform CRUD operations on Elasticsearch

This article shows how to use Python and the zdppy_es framework (a Chinese-developed library) to perform create, read, update, and delete (CRUD) operations on Elasticsearch. Hopefully it serves as a useful reference for developers working on similar problems.


This tutorial also comes with recorded video lessons and private coaching; feel free to message me.


Deploying Elasticsearch 7 with Docker

Create the base container

docker run -itd --name elasticsearch -p 9200:9200 -e "discovery.type=single-node" -e ES_JAVA_OPTS="-Xms2g -Xmx2g"  elasticsearch:7.17.17

Configure username and password

Path of the configuration file inside the container:

/usr/share/elasticsearch/config/elasticsearch.yml

Copy the configuration file out of the container:

# Prepare directories
sudo mkdir -p /docker
sudo chmod 777 -R /docker
mkdir -p /docker/elasticsearch/config
mkdir -p /docker/elasticsearch/data
mkdir -p /docker/elasticsearch/log

# Copy the configuration file
docker cp elasticsearch:/usr/share/elasticsearch/config/elasticsearch.yml /docker/elasticsearch/config/elasticsearch.yml

Modify the configuration file to the following content:

cluster.name: "docker-cluster"
network.host: 0.0.0.0
http.cors.enabled: true
http.cors.allow-origin: "*"
# enable xpack security here
xpack.security.enabled: true

Copy the modified configuration file from the host back into the container:

docker cp /docker/elasticsearch/config/elasticsearch.yml elasticsearch:/usr/share/elasticsearch/config/elasticsearch.yml

Restart the ES service:

docker restart elasticsearch

Enter the container and set the ES passwords:

docker exec -it elasticsearch bash
/usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive

After running the command above, type y to confirm. You will then see the following prompts; enter zhangdapeng520 for every password:

Please confirm that you would like to continue [y/N]y
Enter password for [elastic]: 
Reenter password for [elastic]: 
Enter password for [apm_system]: 
Reenter password for [apm_system]: 
Passwords do not match.
Try again.
Enter password for [apm_system]: 
Reenter password for [apm_system]: 
Enter password for [kibana_system]: 
Reenter password for [kibana_system]: 
Enter password for [logstash_system]: 
Reenter password for [logstash_system]: 
Enter password for [beats_system]: 
Reenter password for [beats_system]: 
Enter password for [remote_monitoring_user]: 
Reenter password for [remote_monitoring_user]: 
Changed password for user [apm_system]
Changed password for user [kibana_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]

This yields the following usernames and passwords:

elastic: zhangdapeng520
apm_system: zhangdapeng520
kibana_system: zhangdapeng520
logstash_system: zhangdapeng520
beats_system: zhangdapeng520
remote_monitoring_user: zhangdapeng520

Test from the host machine whether everything works:

# Without username and password
curl localhost:9200

# With username and password (curl will prompt for the password)
curl localhost:9200 -u elastic

Establish a connection

from es import Elasticsearch

auth = ("elastic", "zhangdapeng520")
es = Elasticsearch("http://192.168.234.128:9200/", basic_auth=auth)
print(es.info())
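
If the address or credentials are wrong, the first API call simply fails. As a quick sanity check you can verify connectivity up front; this is a minimal sketch that assumes zdppy_es mirrors the official elasticsearch-py client and exposes a ping() method:

from es import Elasticsearch

auth = ("elastic", "zhangdapeng520")
es = Elasticsearch("http://192.168.234.128:9200/", basic_auth=auth)

# ping() is assumed to behave like the official client: it returns a truthy
# value when the cluster is reachable and the credentials are accepted.
if es.ping():
    print("Elasticsearch is reachable")
else:
    print("cannot reach Elasticsearch, check the address and credentials")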

Create an index

from es import Elasticsearch

# Connect to ES
auth = ("elastic", "zhangdapeng520")
edb = Elasticsearch("http://192.168.234.128:9200/", basic_auth=auth)

# Create the index
index = "user"
mappings = {
    "properties": {
        "id": {"type": "integer"},
        "name": {"type": "text"},
        "age": {"type": "integer"},
    }
}
edb.indices.create(index=index, mappings=mappings)

# Delete the index
edb.indices.delete(index=index)
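
Calling indices.create twice for the same name raises an error because the index already exists. A hedged sketch that guards the call with indices.exists, assuming the client exposes the same indices API as the official elasticsearch-py library:

from es import Elasticsearch

auth = ("elastic", "zhangdapeng520")
edb = Elasticsearch("http://192.168.234.128:9200/", basic_auth=auth)

index = "user"
mappings = {
    "properties": {
        "id": {"type": "integer"},
        "name": {"type": "text"},
        "age": {"type": "integer"},
    }
}

# Only create the index when it does not exist yet; indices.exists is
# assumed to return a boolean-like value as in the official client.
if not edb.indices.exists(index=index):
    edb.indices.create(index=index, mappings=mappings)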

Insert data

from es import Elasticsearch

# Connect to ES
auth = ("elastic", "zhangdapeng520")
edb = Elasticsearch("http://192.168.234.128:9200/", basic_auth=auth)

# Create the index
index = "user"
mappings = {
    "properties": {
        "id": {"type": "integer"},
        "name": {"type": "text"},
        "age": {"type": "integer"},
    }
}
edb.indices.create(index=index, mappings=mappings)

# Add a document
edb.index(
    index=index,
    id="1",
    document={"id": 1, "name": "张三", "age": 23},
)

# Delete the index
edb.indices.delete(index=index)
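
A newly indexed document only becomes searchable after the next index refresh (roughly one second by default). If you need to search right after writing, the call can request an immediate refresh; this variant assumes edb.index accepts the same refresh parameter as the official client, just like the bulk example later in this article:

# Assumed variant of the insert above: refresh=True makes the new document
# visible to search immediately, at the cost of extra refresh work.
edb.index(
    index=index,
    id="1",
    document={"id": 1, "name": "张三", "age": 23},
    refresh=True,
)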

Query data by ID

from es import Elasticsearch

# Connect to ES
auth = ("elastic", "zhangdapeng520")
edb = Elasticsearch("http://192.168.234.128:9200/", basic_auth=auth)

# Create the index
index = "user"
mappings = {
    "properties": {
        "id": {"type": "integer"},
        "name": {"type": "text"},
        "age": {"type": "integer"},
    }
}
edb.indices.create(index=index, mappings=mappings)

# Add a document
edb.index(
    index=index,
    id="1",
    document={"id": 1, "name": "张三", "age": 23},
)

# Query the document by ID
resp = edb.get(index=index, id="1")
print(resp["_source"])

# Delete the index
edb.indices.delete(index=index)
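
edb.get raises an error when the requested ID does not exist. A small sketch that checks first, assuming the client exposes an exists() method like the official elasticsearch-py API:

# Guard against missing documents instead of letting get() raise.
doc_id = "999"
if edb.exists(index=index, id=doc_id):
    print(edb.get(index=index, id=doc_id)["_source"])
else:
    print(f"document {doc_id} not found in index {index}")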

Bulk insert data

from es import Elasticsearch

# Connect to ES
auth = ("elastic", "zhangdapeng520")
edb = Elasticsearch("http://192.168.234.128:9200/", basic_auth=auth)

# Create the index
index = "user"
mappings = {
    "properties": {
        "id": {"type": "integer"},
        "name": {"type": "text"},
        "age": {"type": "integer"},
    }
}
edb.indices.create(index=index, mappings=mappings)

# Add documents in bulk
data = [
    {"id": 1, "name": "张三1", "age": 23},
    {"id": 2, "name": "张三2", "age": 23},
    {"id": 3, "name": "张三3", "age": 23},
]
new_data = []
for u in data:
    new_data.append({"index": {"_index": index, "_id": f"{u.get('id')}"}})
    new_data.append(u)
edb.bulk(
    index=index,
    operations=new_data,
    refresh=True,
)

# Query the documents
resp = edb.get(index=index, id="1")
print(resp["_source"])
resp = edb.get(index=index, id="2")
print(resp["_source"])
resp = edb.get(index=index, id="3")
print(resp["_source"])

# Delete the index
edb.indices.delete(index=index)
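
The loop that interleaves action lines and documents can be wrapped in a small helper so it is reusable. build_bulk_operations is a hypothetical helper name; the sketch is plain Python and only relies on the operations format already used above:

def build_bulk_operations(index, docs, id_field="id"):
    """Build the operations list expected by edb.bulk: one action line
    followed by the document itself, for every document in docs."""
    operations = []
    for doc in docs:
        operations.append({"index": {"_index": index, "_id": f"{doc.get(id_field)}"}})
        operations.append(doc)
    return operations

# Usage with the same data as above:
edb.bulk(index=index, operations=build_bulk_operations(index, data), refresh=True)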

Query all data

from es import Elasticsearch

# Connect to ES
auth = ("elastic", "zhangdapeng520")
edb = Elasticsearch("http://192.168.234.128:9200/", basic_auth=auth)

# Create the index
index = "user"
mappings = {
    "properties": {
        "id": {"type": "integer"},
        "name": {"type": "text"},
        "age": {"type": "integer"},
    }
}
edb.indices.create(index=index, mappings=mappings)

# Add documents in bulk
data = [
    {"id": 1, "name": "张三1", "age": 23},
    {"id": 2, "name": "张三2", "age": 23},
    {"id": 3, "name": "张三3", "age": 23},
]
new_data = []
for u in data:
    new_data.append({"index": {"_index": index, "_id": f"{u.get('id')}"}})
    new_data.append(u)
edb.bulk(
    index=index,
    operations=new_data,
    refresh=True,
)

# Query all documents
r = edb.search(
    index=index,
    query={"match_all": {}},
)
print(r)
print(r.get("hits").get("hits"))

# Delete the index
edb.indices.delete(index=index)
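
match_all returns every document. To filter on a field you can pass a standard Elasticsearch query instead, for example a match query on name; the query DSL itself is standard ES, and the size parameter is assumed to be supported by the client as in the official library:

# Only documents whose name matches the search term, up to 10 hits.
r = edb.search(
    index=index,
    query={"match": {"name": "张三1"}},
    size=10,
)
print(r.get("hits").get("hits"))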

Extract search results

from es import Elasticsearch

# Connect to ES
auth = ("elastic", "zhangdapeng520")
edb = Elasticsearch("http://192.168.234.128:9200/", basic_auth=auth)

# Create the index
index = "user"
mappings = {
    "properties": {
        "id": {"type": "integer"},
        "name": {"type": "text"},
        "age": {"type": "integer"},
    }
}
edb.indices.create(index=index, mappings=mappings)

# Add documents in bulk
data = [
    {"id": 1, "name": "张三1", "age": 23},
    {"id": 2, "name": "张三2", "age": 23},
    {"id": 3, "name": "张三3", "age": 23},
]
new_data = []
for u in data:
    new_data.append({"index": {"_index": index, "_id": f"{u.get('id')}"}})
    new_data.append(u)
edb.bulk(
    index=index,
    operations=new_data,
    refresh=True,
)

# Query all documents
r = edb.search(
    index=index,
    query={"match_all": {}},
)

def get_search_data(data):
    new_data = []
    # First level: the hits object
    hits = data.get("hits")
    if hits is None:
        return new_data
    # Second level: the list of hits
    hits = hits.get("hits")
    if hits is None:
        return new_data
    # Third level: the _source of every hit
    for hit in hits:
        new_data.append(hit.get("_source"))
    return new_data

print(get_search_data(r))

# Delete the index
edb.indices.delete(index=index)
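
Since the structure of a search response is fixed (hits.hits[*]._source), the same extraction can be written more compactly. extract_sources is a hypothetical alternative to the helper above; it is plain Python and makes no extra API assumptions:

def extract_sources(resp):
    """Return the _source of every hit, or an empty list if there are none."""
    hits = (resp.get("hits") or {}).get("hits") or []
    return [hit.get("_source") for hit in hits]

print(extract_sources(r))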

Update data by ID

import time

from es import Elasticsearch

# Connect to ES
auth = ("elastic", "zhangdapeng520")
edb = Elasticsearch("http://192.168.234.128:9200/", basic_auth=auth)

# Create the index
index = "user"
mappings = {
    "properties": {
        "id": {"type": "integer"},
        "name": {"type": "text"},
        "age": {"type": "integer"},
    }
}
edb.indices.create(index=index, mappings=mappings)

# Add documents in bulk
data = [
    {"id": 1, "name": "张三1", "age": 23},
    {"id": 2, "name": "张三2", "age": 23},
    {"id": 3, "name": "张三3", "age": 23},
]
new_data = []
for u in data:
    new_data.append({"index": {"_index": index, "_id": f"{u.get('id')}"}})
    new_data.append(u)
edb.bulk(
    index=index,
    operations=new_data,
    refresh=True,
)

# Update the document with ID 1
edb.update(
    index=index,
    id="1",
    doc={"id": "1", "name": "张三333", "age": 23},
)

# Query all documents
time.sleep(1)  # wait a moment so the update becomes visible
r = edb.search(
    index=index,
    query={"match_all": {}},
)

def get_search_data(data):
    new_data = []
    # First level: the hits object
    hits = data.get("hits")
    if hits is None:
        return new_data
    # Second level: the list of hits
    hits = hits.get("hits")
    if hits is None:
        return new_data
    # Third level: the _source of every hit
    for hit in hits:
        new_data.append(hit.get("_source"))
    return new_data

print(get_search_data(r))

# Delete the index
edb.indices.delete(index=index)
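
The time.sleep(1) above only waits for the next automatic refresh. If the update call accepts a refresh parameter, as the official elasticsearch-py client does, the sleep can be dropped; this is an assumed variant, not something the library is documented to support here:

# Assumed variant: ask ES to refresh the affected shards before returning,
# so the search that follows already sees the new values.
edb.update(
    index=index,
    id="1",
    doc={"id": "1", "name": "张三333", "age": 23},
    refresh=True,
)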

Delete data by ID

import time

from es import Elasticsearch

# Connect to ES
auth = ("elastic", "zhangdapeng520")
edb = Elasticsearch("http://192.168.234.128:9200/", basic_auth=auth)

# Create the index
index = "user"
mappings = {
    "properties": {
        "id": {"type": "integer"},
        "name": {"type": "text"},
        "age": {"type": "integer"},
    }
}
edb.indices.create(index=index, mappings=mappings)

# Add documents in bulk
data = [
    {"id": 1, "name": "张三1", "age": 23},
    {"id": 2, "name": "张三2", "age": 23},
    {"id": 3, "name": "张三3", "age": 23},
]
new_data = []
for u in data:
    new_data.append({"index": {"_index": index, "_id": f"{u.get('id')}"}})
    new_data.append(u)
edb.bulk(
    index=index,
    operations=new_data,
    refresh=True,
)

# Delete the document with ID 1
edb.delete(index=index, id="1")

# Query all documents
time.sleep(1)  # wait a moment so the deletion becomes visible
r = edb.search(
    index=index,
    query={"match_all": {}},
)

def get_search_data(data):
    new_data = []
    # First level: the hits object
    hits = data.get("hits")
    if hits is None:
        return new_data
    # Second level: the list of hits
    hits = hits.get("hits")
    if hits is None:
        return new_data
    # Third level: the _source of every hit
    for hit in hits:
        new_data.append(hit.get("_source"))
    return new_data

print(get_search_data(r))

# Delete the index
edb.indices.delete(index=index)
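
Deleting by ID removes a single document. To remove every document matching a condition, Elasticsearch offers the delete-by-query API; the sketch below assumes the client exposes it as delete_by_query, as the official library does:

# Hypothetical example: delete all documents whose age is 23.
edb.delete_by_query(
    index=index,
    query={"match": {"age": 23}},
    refresh=True,
)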

That concludes this article on using Python and the zdppy_es framework to perform CRUD operations on Elasticsearch. Hopefully the material is helpful to other developers.



