AutoML Outlook and Practice ---- An Introduction to H2O Sparkling Water

2023-10-09 15:30


Outline

  • Introduction
  • Sparkling Water
    • Typical Use Case
    • Requirements
    • Installing and Starting
  • PySparkling
    • Installation
    • Typical Use Cases
      • Model Building
      • Data Munging
      • Stream Processing
  • References
    • H2O Sparkling Water
    • PySparkling


Introduction

Sparkling Water allows users to combine the fast, scalable machine learning algorithms of H2O with the capabilities of Spark. With Sparkling Water, users can drive computation from Scala/R/Python and utilize the H2O Flow UI, providing an ideal machine learning platform for application developers.

Spark is an elegant and powerful general-purpose, open-source, in-memory platform with tremendous momentum. H2O is an in-memory application for machine learning that is reshaping how people apply math and predictive analytics to their business problems.

Integrating these two open-source environments provides a seamless experience for users who want to make a query using Spark SQL, feed the results into H2O to build a model and make predictions, and then use the results again in Spark. For any given problem, better interoperability between tools provides a better experience.

Sparkling Water is licensed under the Apache License, Version 2.0.



Sparkling Water

  • GitHub: https://github.com/h2oai/sparkling-water

Typical Use Case

Sparkling Water excels in leveraging existing Spark-based workflows that need to call advanced machine learning algorithms. A typical example involves data munging with the help of a Spark API, where a prepared table is passed to the H2O DeepLearning algorithm. The constructed DeepLearning model estimates different metrics based on the testing data, which can be used in the rest of the Spark workflow.

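A minimal sketch of that flow (the input DataFrame spark_df and the column names are hypothetical; assumes a running SparkSession):

from pysparkling import H2OContext
from h2o.estimators import H2ODeepLearningEstimator

hc = H2OContext.getOrCreate()

# Data munging with the Spark API (spark_df is a pre-existing Spark DataFrame)
prepared = spark_df.select("feature1", "feature2", "label").na.drop()

# Hand the prepared table to H2O and train a Deep Learning model
train = hc.asH2OFrame(prepared, "train")
dl = H2ODeepLearningEstimator(epochs=10)
dl.train(y="label", training_frame=train)

# Metrics estimated on testing data can flow back into the Spark workflow,
# e.g. perf = dl.model_performance(test)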

Requirements

  • Linux/OS X/Windows

  • Java 1.8+

  • Python 2.7+ for the Python version of Sparkling Water (PySparkling)

  • Spark 2.2+, with the SPARK_HOME shell variable pointing to your local Spark installation
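A quick way to sanity-check these requirements before starting (a sketch, not part of the official docs):

import os
import subprocess

# SPARK_HOME must point to the local Spark installation
print("SPARK_HOME:", os.environ.get("SPARK_HOME"))

# Java 1.8+ must be on the PATH
subprocess.run(["java", "-version"])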

Installing and Starting


Basic design concepts of H2O


Sparkling Water supports two types of backends: internal and external.

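The backend is selected through H2OConf; a minimal sketch (internal mode is the default):

from pysparkling import H2OConf, H2OContext

conf = H2OConf()
# conf.setInternalClusterMode()   # default: H2O nodes run inside the Spark executors
conf.setExternalClusterMode()     # alternative: connect to a separately managed H2O cluster

hc = H2OContext.getOrCreate(conf)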


PySparkling

Documentation:
https://docs.h2o.ai/sparkling-water/2.3/latest-stable/doc/pysparkling.html#pysparkling

Installation:
https://docs.h2o.ai/sparkling-water/3.1/latest-stable/doc/install/install_and_start.html#download-and-run-locally

Code samples:
https://github.com/h2oai/sparkling-water/tree/master/examples#step-by-step-weather-data-example

Installation

Which H2O package should I install?

The H2O distribution zip file contains two Python installation artifacts (wheel files): h2o and h2o_client. You can install the full-featured “h2o” package that can be used in a standalone setup (as well as cluster deployment), or you can choose the client-only version of the package, “h2o_client”.

  • h2o: Universal deployment package - can be used in standalone mode (e.g., H2O started on a user's laptop) or to connect to an H2O cluster. This is what most users will choose to install.

  • h2o_client: A variant of the h2o package that doesn't come with the H2O Java code and cannot be used in standalone deployments. This version is suited especially for enterprise deployments where users connect to H2O clusters, and starting a standalone H2O instance on an edge node needs to be prevented.

Both packages provide identical APIs and sets of features.

PySparkling is an integration of Python with Sparkling Water. It allows the user to start H2O services on a Spark cluster from the Python API.

In the PySparkling driver program, the Spark Context, which uses Py4J to start the driver JVM and the Java Spark Context, is used to create the H2O Context (hc). That, in turn, starts an H2O cloud (cluster) in the Spark ecosystem. Once the H2O cluster is up, the H2O Python package is used to interact with it and run H2O algorithms. All pure H2O calls are executed via H2O’s REST API interface. Users can easily integrate their regular PySpark workflow with H2O algorithms using PySparkling.

PySparkling programs can be launched as an application or in an interactive shell or notebook environment.
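For instance, a minimal interactive session (a sketch; assumes the packages from the installation script below are available) looks like this:

import h2o
from pyspark.sql import SparkSession
from pysparkling import H2OContext

spark = SparkSession.builder.appName("pysparkling-demo").getOrCreate()

# Starts the H2O cloud on the Spark cluster and connects the Python client over REST
hc = H2OContext.getOrCreate()

# Regular h2o calls now run against the cluster
print(h2o.cluster().cloud_name)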

Installation script

conda create -n myspark312 python=3.9
conda activate myspark312
conda install ipykernel
pip install pyspark==3.1.2
pip install numpy
pip install h2o_pysparkling_3.1

After installation, the main goal is to use the h2o API to drive PySpark's SparkSession directly.

Let's first look at the source file:

#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import h2o
import numbers
import warnings

from ai.h2o.sparkling.H2OConf import H2OConf
from ai.h2o.sparkling.Initializer import Initializer
from h2o.frame import H2OFrame
from h2o.utils.typechecks import assert_is_type, Enum
from pyspark.rdd import RDD
from pyspark.sql import SparkSession
from pyspark.sql.dataframe import DataFrame
from pyspark.sql.types import StringType, BooleanType, IntegerType, LongType, FloatType


class H2OContext(object):
    __isConnected = False

    def __init__(self):
        """
        This constructor is used just to initialize the environment. It does not start H2OContext.
        To start H2OContext use one of the getOrCreate methods. This constructor is internally used in those methods
        """
        try:
            Initializer.load_sparkling_jar()
        except:
            raise

    def __h2o_connect(h2o_context):
        schema = h2o_context._jhc.getConf().getScheme()
        conf = h2o_context._conf

        kwargs = {}
        kwargs["https"] = schema == "https"
        kwargs["verify_ssl_certificates"] = conf.verifySslCertificates()
        kwargs["cacert"] = conf.sslCACert()
        if conf.userName() and conf.password():
            kwargs["auth"] = (conf.userName(), conf.password())
        url = "{}://{}:{}".format(schema, h2o_context._client_ip, h2o_context._client_port)
        if conf.contextPath() is not None:
            url = "{}/{}".format(url, conf.contextPath())
        return h2o.connect(url=url, **kwargs)

    @staticmethod
    def getOrCreate(conf=None):
        """
        Get existing or create new H2OContext based on provided H2O configuration. If the conf parameter is set then
        configuration from it is used. Otherwise the configuration properties passed to Sparkling Water are used.
        If the values are not found the default values are used in most of the cases. The default cluster mode
        is internal, ie. spark.ext.h2o.external.cluster.mode=false

        :param conf: H2O configuration as instance of H2OConf
        :return: instance of H2OContext
        """
        # Workaround for bug in Spark 2.1 as SparkSession created in PySpark is not seen in Java
        # and call SparkSession.builder.getOrCreate on Java side creates a new session, which is not
        # desirable
        activeSession = SparkSession._instantiatedSession
        jvm = activeSession.sparkContext._jvm
        jvm.org.apache.spark.sql.SparkSession.setDefaultSession(activeSession._jsparkSession)

        if conf is not None:
            selected_conf = conf
        else:
            selected_conf = H2OConf()
        selected_conf.set("spark.ext.h2o.client.language", "python")
        h2o_context = H2OContext()

        # Create backing H2OContext
        h2o_context._jvm = jvm
        package = getattr(jvm.ai.h2o.sparkling, "H2OContext$")
        module = package.__getattr__("MODULE$")
        jhc = module.getOrCreate(selected_conf._jconf)
        h2o_context._jhc = jhc
        h2o_context._conf = selected_conf
        h2o_context._client_ip = jhc.h2oLocalClientIp()
        h2o_context._client_port = jhc.h2oLocalClientPort()

        # Create H2O REST API client
        if not h2o_context.__isClientConnected() or not H2OContext.__isConnected:
            h2o_context.__h2o_connect()
            H2OContext.__isConnected = True
            h2o_context.__setClientConnected()
            print(h2o_context)

        return h2o_context

    def __isStopped(self):
        hc = self._jhc
        field = hc.getClass().getDeclaredField("stopped")
        field.setAccessible(True)
        return field.get(hc)

    def __isClientConnected(self):
        field = self.__getClientConnectedField()
        return field.get(self._jhc)

    def __setClientConnected(self):
        field = self.__getClientConnectedField()
        field.set(self._jhc, True)

    def __getClientConnectedField(self):
        field = self._jhc.getClass().getDeclaredField("clientConnected")
        field.setAccessible(True)
        return field

    def stop(self, stopSparkContext=False):
        h2o.connection().close()
        scalaStopMethod = getattr(self._jhc, "ai$h2o$sparkling$H2OContext$$stop")
        scalaStopMethod(stopSparkContext, False, False)  # stopSpark = False, stopJVM = False, inShutdownHook = False

    def downloadH2OLogs(self, destination, container="ZIP"):
        assert_is_type(container, Enum("ZIP", "LOG"))
        return self._jhc.downloadH2OLogs(destination, container)

    def __str__(self):
        if self.__isClientConnected() and not self.__isStopped():
            return self._jhc.toString()
        else:
            return "H2OContext has been stopped or hasn't been created. Call H2OContext.getOrCreate() or " \
                   "H2OContext.getOrCreate(conf) to create a new one."

    def __repr__(self):
        self.show()
        return ""

    def show(self):
        print(self)

    def getConf(self):
        return self._conf

    def setH2OLogLevel(self, level):
        self._jhc.setH2OLogLevel(level)

    def getH2OLogLevel(self):
        return self._jhc.getH2OLogLevel()

    def importHiveTable(self, database="default", table=None, partitions=None, allowMultiFormat=False):
        return h2o.import_hive_table(database, table, partitions, allowMultiFormat)

    def asSparkFrame(self, h2oFrame, copyMetadata=True):
        """
        Transforms given H2OFrame to Spark DataFrame

        Parameters
        ----------
        h2oFrame : H2OFrame
        copyMetadata : Bool = True

        Returns
        -------
        Spark DataFrame
        """
        if isinstance(h2oFrame, H2OFrame):
            frame_id = h2oFrame.frame_id
            jdf = self._jhc.asSparkFrame(frame_id, copyMetadata)
            sqlContext = SparkSession.builder.getOrCreate()._wrapped
            df = DataFrame(jdf, sqlContext)
            # Attach h2o_frame to dataframe which forces python not to delete the frame when we leave the scope of this
            # method.
            # Without this, after leaving this method python would garbage collect the frame since it's not used
            # anywhere and spark, when executing any action on this dataframe, will fail since the frame
            # would be missing.
            df._h2o_frame = h2oFrame
            return df

    def asH2OFrame(self, sparkFrame, h2oFrameName=None, fullCols=-1):
        """
        Transforms given Spark RDD or DataFrame to H2OFrame.

        Parameters
        ----------
        sparkFrame : Spark RDD or DataFrame
        h2oFrameName : Optional name for resulting H2OFrame
        fullCols : number of first n columns which are sent to the client together with the data

        Returns
        -------
        H2OFrame which contains data of original input Spark data structure
        """
        assert_is_type(sparkFrame, DataFrame, RDD)
        df = H2OContext.__prepareSparkDataForConversion(self._jvm, sparkFrame)
        if h2oFrameName is None:
            key = self._jhc.asH2OFrame(df._jdf).frameId()
        else:
            key = self._jhc.asH2OFrame(df._jdf, h2oFrameName).frameId()
        return H2OFrame.get_frame(key, full_cols=fullCols, light=True)

    @staticmethod
    def __prepareSparkDataForConversion(_jvm, sparkData):
        if isinstance(sparkData, DataFrame):
            return sparkData
        elif sparkData.isEmpty():
            return sparkData.toDF()
        else:
            session = SparkSession.builder.getOrCreate()
            first = sparkData.first()
            if isinstance(first, (str, bool, numbers.Integral, float)):
                if isinstance(first, str):
                    return session.createDataFrame(sparkData, StringType())
                elif isinstance(first, bool):
                    return session.createDataFrame(sparkData, BooleanType())
                elif (isinstance(sparkData.min(), numbers.Integral) and isinstance(sparkData.max(), numbers.Integral)):
                    if sparkData.min() >= _jvm.Integer.MIN_VALUE and sparkData.max() <= _jvm.Integer.MAX_VALUE:
                        return session.createDataFrame(sparkData, IntegerType())
                    elif sparkData.min() >= _jvm.Long.MIN_VALUE and sparkData.max() <= _jvm.Long.MAX_VALUE:
                        return session.createDataFrame(sparkData, LongType())
                    else:
                        warnings.warn("Maximal or minimal number in RDD is too big to convert to Java. Treating numbers as strings.")
                        return session.createDataFrame(sparkData, StringType())
                elif isinstance(first, float):
                    # Spark would fail when creating data frame if there is int type in RDD[Float]
                    # Convert explicitly all to float
                    return session.createDataFrame(sparkData.map(lambda x: float(x)), FloatType())
                else:
                    raise ValueError('Unreachable code')
            else:
                return session.createDataFrame(sparkData)
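With the source above in mind, a minimal round-trip between the two worlds might look like this (a sketch with made-up data):

from pyspark.sql import SparkSession
from pysparkling import H2OContext

spark = SparkSession.builder.getOrCreate()
hc = H2OContext.getOrCreate()

df = spark.createDataFrame([(1.0, "a"), (2.0, "b")], ["x", "y"])

# Spark DataFrame -> H2OFrame (asH2OFrame in the source above)
hf = hc.asH2OFrame(df, "demo_frame")

# H2OFrame -> Spark DataFrame (asSparkFrame keeps a reference to hf so it
# is not garbage collected while the DataFrame is alive)
df2 = hc.asSparkFrame(hf)
df2.show()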

A note on the installation above: I ran it under the Windows Subsystem for Linux, assuming the files would be visible from the Windows file system, only to find that WSL now uses a virtual disk file directly.
WSL path: C:\Users\wangyny\AppData\Local\Packages\CanonicalGroupLimited.Ubuntu20.04onWindows_79rhkp1fndgsc\LocalState

Typical Use Cases

Model Building

A typical example involves multiple data transformations with the help of the Spark API, where a final form of data is transformed into an H2O frame and passed to an H2O algorithm. The constructed model estimates different metrics based on the testing data or gives a prediction that can be used in the rest of the data pipeline (see Figure 1).
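Sketched in code with H2O AutoML as the algorithm (a minimal sketch; train, test, and the "label" column are hypothetical H2OFrames and names, and any H2O estimator could take AutoML's place):

from h2o.automl import H2OAutoML

# train/test are H2OFrames, e.g. obtained via hc.asH2OFrame(...)
aml = H2OAutoML(max_models=10, seed=1)
aml.train(y="label", training_frame=train)

perf = aml.leader.model_performance(test)  # metrics on the testing data
preds = aml.leader.predict(test)           # predictions for the rest of the pipeline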

Data Munging

Another use case includes Sparkling Water as a provider of ad-hoc data transformations. Figure 2 shows a data pipeline benefiting from H2O's parallel data load and parse capabilities, while the Spark API is used as another provider of data transformations. Furthermore, H2O can be used as an in-place data transformer.
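For example (a sketch; the file path and column name are hypothetical, and hc is an existing H2OContext), H2O can do the parallel load and parse, with the Spark API taking over afterwards:

import h2o

# H2O loads and parses the file in parallel across the cluster
raw = h2o.import_file("hdfs://namenode/path/to/data.csv")

# Continue the munging with the Spark API
df = hc.asSparkFrame(raw)
df = df.filter(df["value"].isNotNull())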

Stream Processing

The last use case, depicted in Figure 3, introduces two data pipelines. The first one, called an off-line training pipeline, is invoked regularly (e.g., every hour or every day) and utilizes both the Spark and H2O APIs. The off-line pipeline provides an H2O model as output. The H2O API allows the model to be exported in a form independent of the H2O run-time. The second pipeline processes streaming data (with the help of Spark Streaming or Storm) and utilizes the model trained in the first pipeline to score the incoming data. Since the model is exported with no run-time dependency on H2O, the streaming pipeline can be lightweight and independent of the H2O or Sparkling Water infrastructure.

Figure 3: Sparkling Water used as an off-line model producer feeding models into a stream-based data pipeline.
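One common way to get such a run-time-independent export is a MOJO (a sketch of the off-line side; 'model' stands for the model produced by the training step):

# In the off-line training pipeline: export the trained model as a MOJO
mojo_path = model.download_mojo(path="/tmp/models")

# The streaming pipeline can then score with the h2o-genmodel Java library,
# with no dependency on a running H2O or Sparkling Water cluster.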



References

H2O Sparkling Water

H2O Sparkling Water documentation

  • https://databricks.com/blog/2014/06/30/sparkling-water-h20-spark.html

PySparkling

  • https://github.com/MachineLP/CodeFun/blob/master/05-auto_ml_dl/01-auto_ml/01-%E8%87%AA%E5%8A%A8%E5%8C%96%E7%89%B9%E5%BE%81%E5%B7%A5%E7%A8%8B.md

  • https://bbs.huaweicloud.com/blogs/134113

  • Visual and drag-and-drop modeling
