Integrating Flink 1.9.2 as a Custom Service in Ambari 2.7.4
Ambari supports integrating custom service components; the following walks through integrating the Flink 1.9.2 component into Ambari 2.7.4.
First, copy the Flink custom-service scripts into the stack services directory on the ambari-server node: /var/lib/ambari-server/resources/stacks/HDP/3.1/services
[unicom@nn71 services]$ pwd
/var/lib/ambari-server/resources/stacks/HDP/3.1/services
[unicom@nn71 services]$ tar zxvf /home/unicom/FLINK1.9.2.tar.gz -C .
FLINK/
FLINK/configuration/
FLINK/configuration/flink-ambari-config.xml
FLINK/configuration/flink-env.xml
FLINK/kerberos.json
FLINK/package/
FLINK/package/.hash
FLINK/package/archive.zip
FLINK/package/scripts/
FLINK/package/scripts/flink.py
FLINK/package/scripts/params.py
FLINK/package/scripts/status_params.py
FLINK/README.md
FLINK/screenshots/
FLINK/screenshots/Flink-UI-1.png
FLINK/screenshots/Flink-UI-2.png
FLINK/screenshots/Flink-UI-3.png
FLINK/screenshots/Flink-wordcount.png
FLINK/screenshots/Install-wizard.png
FLINK/screenshots/Installed-service-config.png
FLINK/screenshots/Installed-service-stop.png
FLINK/screenshots/YARN-UI.png
FLINK/metainfo.xml
[unicom@nn71 services]$ ll
total 0
drwxr-xr-x 3 unicom root 47 Mar 13 11:18 ATLAS
drwxrwxr-x 5 unicom unicom 119 Mar 5 22:27 FLINK
drwxr-xr-x 2 unicom root 26 Mar 13 11:18 HBASE
drwxr-xr-x 3 unicom root 127 Mar 13 11:18 HDFS
drwxr-xr-x 2 unicom root 106 Mar 13 11:18 HIVE
drwxr-xr-x 2 unicom root 26 Mar 13 11:18 KAFKA
drwxr-xr-x 2 unicom root 26 Mar 13 11:18 KNOX
drwxr-xr-x 2 unicom root 26 Mar 13 11:18 PIG
drwxr-xr-x 3 unicom root 127 Mar 13 11:18 RANGER
drwxr-xr-x 4 unicom root 61 Mar 13 11:18 RANGER_KMS
drwxr-xr-x 2 unicom root 26 Mar 13 11:18 TEZ
drwxr-xr-x 2 unicom root 26 Mar 13 11:18 YARN
drwxr-xr-x 2 unicom root 26 Mar 13 11:18 ZOOKEEPER
To change the Flink version displayed in Ambari, edit FLINK/metainfo.xml.
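The displayed version lives in the service block of metainfo.xml. A minimal sketch of the relevant fragment (element names follow the Ambari service metainfo schema; surrounding elements are omitted):

```xml
<!-- FLINK/metainfo.xml (fragment): <version> controls the version string
     shown for this service in the Ambari UI. -->
<service>
  <name>FLINK</name>
  <displayName>Flink</displayName>
  <version>1.9.2</version>
</service>
```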
Then edit the Flink package download path in the service configuration files under /var/lib/ambari-server/resources/stacks/HDP/3.1/services/FLINK/configuration, pointing it at a reachable copy of the tarball, e.g.:
http://10.172.54.71/ambari/HDP/centos7/3.1.4.0-315/flink/flink-1.9.2-bin-scala_2.12.tgz
[unicom@nn71 configuration]$ ll
total 8
-rwxr-xr-x 1 unicom unicom 2183 Mar 25 21:15 flink-ambari-config.xml
-rwxr-xr-x 1 unicom unicom 3678 Mar 5 22:27 flink-env.xml
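The tarball location is a property in flink-ambari-config.xml. A sketch of the entry to edit (the property name `flink_download_url` is the one used by the common ambari-flink-service scripts; verify it against your copy):

```xml
<!-- FLINK/configuration/flink-ambari-config.xml (fragment, assumed property
     name): point the download URL at a reachable copy of the tarball. -->
<property>
  <name>flink_download_url</name>
  <value>http://10.172.54.71/ambari/HDP/centos7/3.1.4.0-315/flink/flink-1.9.2-bin-scala_2.12.tgz</value>
  <description>Location of the Flink binary distribution</description>
</property>
```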
On the install node, set the environment variables and apply them. Flink on YARN requires HADOOP_CLASSPATH=`hadoop classpath`.
[root@nn71 flink]# vim /etc/profile
export FLINK_HOME=/usr/hdp/3.1.4.0-315/flink
export PATH=$PATH:$FLINK_HOME/bin
export HADOOP_CLASSPATH=`hadoop classpath`
:wq!
[root@nn71 flink]# source /etc/profile
Restart ambari-server; the corresponding Flink version then appears in the Ambari Add Service wizard.
[unicom@nn71 flink]$ sudo ambari-server restart
Using python /usr/bin/python
Restarting ambari-server
Waiting for server stop...
Ambari Server stopped
Ambari Server running with administrator privileges.
Organizing resource files at /var/lib/ambari-server/resources...
Ambari database consistency check started...
Server PID at: /var/run/ambari-server/ambari-server.pid
Server out at: /var/log/ambari-server/ambari-server.out
Server log at: /var/log/ambari-server/ambari-server.log
Waiting for server start..................
Server started listening on 8080
DB configs consistency check found warnings. See /var/log/ambari-server/ambari-server-check-database.log for more details.
[unicom@nn71 flink]$
Add the service through the Ambari Add Service wizard.
In the custom-service configuration step, verify the Flink install path and adjust the other paths as needed.
The install then fails with: KeyError: u'flink'
stderr:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/hook.py", line 38, in <module>
BeforeAnyHook().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
method(env)
File "/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/hook.py", line 31, in hook
setup_users()
File "/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/shared_initialization.py", line 50, in setup_users
groups = params.user_to_groups_dict[user],
KeyError: u'flink'
Error: Error: Unable to run the custom hook script ['/usr/bin/python', '/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/hook.py', 'ANY', '/var/lib/ambari-agent/data/command-2302.json', '/var/lib/ambari-agent/cache/stack-hooks/before-ANY', '/var/lib/ambari-agent/data/structured-out-2302.json', 'INFO', '/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1_2', '']
Cause: when a custom service is added through Ambari, the service account is not created automatically.
Fix: change the cluster-env property ignore_groupsusers_create, using the configs.py helper script shipped with ambari-server:
[unicom@nn71 scripts]$ pwd
/var/lib/ambari-server/resources/scripts
[unicom@nn71 scripts]$ python configs.py --help
Usage: configs.py [options]
Options:
-h, --help show this help message and exit
-t PORT, --port=PORT Optional port number for Ambari server. Default is
'8080'. Provide empty string to not use port.
-s PROTOCOL, --protocol=PROTOCOL
Optional support of SSL. Default protocol is 'http'
--unsafe Skip SSL certificate verification.
-a ACTION, --action=ACTION
Script action: <get>, <set>, <delete>
-l HOST, --host=HOST Server external host name
-n CLUSTER, --cluster=CLUSTER
Name given to cluster. Ex: 'c1'
-c CONFIG_TYPE, --config-type=CONFIG_TYPE
One of the various configuration types in Ambari. Ex:
core-site, hdfs-site, mapred-queue-acls, etc.
-b VERSION_NOTE, --version-note=VERSION_NOTE
Version change notes which will help to know what has
been changed in this config. This value is optional
and is used for actions <set> and <delete>.
To specify credentials please use "-e" OR "-u" and "-p'":
-u USER, --user=USER
Optional user ID to use for authentication. Default is
'admin'
-p PASSWORD, --password=PASSWORD
Optional password to use for authentication. Default
is 'admin'
-e CREDENTIALS_FILE, --credentials-file=CREDENTIALS_FILE
Optional file with user credentials separated by new
line.
To specify property(s) please use "-f" OR "-k" and "-v'":
-f FILE, --file=FILE
File where entire configurations are saved to, or read
from. Supported extensions (.xml, .json>)
-k KEY, --key=KEY Key that has to be set or deleted. Not necessary for
'get' action.
-v VALUE, --value=VALUE
Optional value to be set. Not necessary for 'get' or
'delete' actions.
# Check the current value of ignore_groupsusers_create
#python configs.py -u admin -p admin -n $cluster_name -l $ambari_server -t 8080 -a get -c cluster-env |grep -i ignore_groupsusers_create
python configs.py -u admin -p admin -n HDP71 -l nn71 -t 8080 -a get -c cluster-env |grep -i ignore_groupsusers_create
Set the ignore_groupsusers_create property to true:
# python configs.py -u admin -p admin -n $cluster_name -l $ambari_server -t 8080 -a set -c cluster-env -k ignore_groupsusers_create -v true
python configs.py -u admin -p admin -n HDP71 -l nn71 -t 8080 -a set -c cluster-env -k ignore_groupsusers_create -v true
Retrying the Flink install from Ambari still fails with the same error.
Since the account apparently cannot be created automatically, check: the flink group exists but the flink user does not, so add the user manually.
The flink group was created automatically:
[root@nn71 flink]# vim /etc/group
The flink user was not created:
[root@nn71 flink]# vim /etc/passwd
#useradd -d /home/flink -g flink flink
[root@nn71 flink]# useradd -d /home/flink -g flink flink
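The manual step above can be made idempotent so that a later retry does not fail if the user already exists. A minimal sketch (the `ensure_flink_user` helper and `DRY_RUN` switch are illustrative, not part of the service scripts):

```shell
# Create the flink user in the already-existing flink group only when it is
# missing. DRY_RUN=1 prints the command instead of running it (useradd
# requires root).
ensure_flink_user() {
  if id -u flink >/dev/null 2>&1; then
    echo "user flink already exists"
  elif [ "${DRY_RUN:-0}" = "1" ]; then
    echo "would run: useradd -d /home/flink -g flink flink"
  else
    useradd -d /home/flink -g flink flink
  fi
}
DRY_RUN=1 ensure_flink_user
```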
[root@nn71 flink]#
After another retry the install completes, but starting the service raises an alert.
The error details:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/stacks/HDP/3.1/services/FLINK/package/scripts/flink.py", line 163, in <module>
Master().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
method(env)
File "/var/lib/ambari-agent/cache/stacks/HDP/3.1/services/FLINK/package/scripts/flink.py", line 110, in start
Execute (cmd + format(" >> {flink_log_file}"), user=params.flink_user)
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run
returns=self.resource.returns)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'export HADOOP_CONF_DIR=/etc/hadoop/conf; /usr/hdp/3.1.4.0-315/flink/bin/yarn-session.sh -n 1 -s 1 -jm 768 -tm 1024 -qu default -nm flinkapp-from-ambari -d >> /data/var/log/flink/flink-setup.log' returned 1. Error: A JNI error has occurred, please check your installation and try again
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/yarn/exceptions/YarnException
at java.lang.Class.getDeclaredMethods0(Native Method)
at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
at java.lang.Class.privateGetMethodRecursive(Class.java:3048)
at java.lang.Class.getMethod0(Class.java:3018)
at java.lang.Class.getMethod(Class.java:1784)
at sun.launcher.LauncherHelper.validateMainClass(LauncherHelper.java:544)
at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:526)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.yarn.exceptions.YarnException
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 7 more
On the install node, as the flink user, start Flink manually:
[flink@nn71 flink]$ /usr/hdp/3.1.4.0-315/flink/bin/yarn-session.sh -n 1 -s 1 -jm 768 -tm 1024 -qu default -nm flinkapp-from-ambari -d >> /data/var/log/flink/flink-setup.log
On the Flink server node, as the flink user, create a flink.pid file in the configured pid directory and write into it the application_id of the Flink session shown in the YARN UI.
After saving, start Flink from the Ambari page; it now starts normally.
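The pid-file step can be sketched as follows. Both values below are placeholders: the real pid directory comes from the service configuration (see package/scripts/status_params.py), and the real application id from the YARN UI or `yarn application -list`:

```shell
# Write the YARN application id into flink.pid so the Ambari status check
# sees the service as running. PID_DIR and APP_ID are placeholders.
PID_DIR="${PID_DIR:-/tmp/flink-run}"      # real path: the configured flink pid dir
APP_ID="application_1584000000000_0001"   # real id: from the YARN UI
mkdir -p "$PID_DIR"
echo "$APP_ID" > "$PID_DIR/flink.pid"
```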