Local mode (single machine)

	Start directly after unpacking; no configuration changes are needed
		./bin/start-local.sh

	Open a listening port (used as the socket data source)
		nc -l 9000

	Upload the job jar and run it (Flink dependencies in the project should use <scope>provided</scope>)
		./bin/flink run XX.jar --port 9000
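The <scope>provided</scope> note refers to the job project's Maven POM: the Flink runtime classes are already on the cluster's classpath, so they should not be bundled into the fat jar. A minimal sketch (the artifact id and version here are assumptions for a typical Flink 1.x project):

```xml
<!-- Flink core is on the cluster classpath; keep it out of the fat jar -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-java_2.11</artifactId>
    <version>1.7.2</version>
    <scope>provided</scope>
</dependency>
```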

	Open the JobManager web UI --> node1:8081

	To stop a job, first get its job id
		./bin/flink list
		./bin/flink cancel [jobid]

		A job can also be cancelled from the web UI
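Cancelling by hand means copying the job id out of `./bin/flink list`. A small sketch that extracts the ids instead; the line format (timestamp, id, and name separated by " : ", with the state in parentheses) is assumed from Flink 1.x output:

```shell
#!/usr/bin/env bash
# Extract running job ids from `flink list` output: the id is the
# second " : "-separated field on lines ending in "(RUNNING)".
running_job_ids() {
  grep '(RUNNING)$' | awk -F' : ' '{print $2}'
}

# Example against a live cluster (cancels every running job):
# ./bin/flink list | running_job_ids | xargs -r -n1 ./bin/flink cancel
```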

	Run ./bin/flink -h to see the available commands

standalone mode (build a cluster: copy node1's flink directory to the other nodes)

	Edit the configuration files
		vi flink-conf.yaml (mind the YAML format: one space after the colon)
			jobmanager.rpc.address: localhost -->   jobmanager.rpc.address: node1
		vi slaves (list the worker node hostnames)
			node2
			node3
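Copying node1's Flink directory to the workers can be scripted from the slaves file itself. A sketch, assuming Flink is installed at /export/server/flink and passwordless ssh to the workers is already set up:

```shell
#!/usr/bin/env bash
# Push the configured Flink directory to every worker listed in conf/slaves.
FLINK_HOME=${FLINK_HOME:-/export/server/flink}   # assumed install path

read_workers() {      # print hostnames, ignoring blank lines and # comments
  grep -v -e '^[[:space:]]*#' -e '^[[:space:]]*$' "$1"
}

if [ -f "$FLINK_HOME/conf/slaves" ]; then
  for host in $(read_workers "$FLINK_HOME/conf/slaves"); do
    scp -r "$FLINK_HOME" "$host:$(dirname "$FLINK_HOME")/"
  done
fi
```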

	Start the cluster
		./bin/start-cluster.sh

	Run the job
		./bin/flink run XX.jar --port 9000

	Use the web UI's Running Jobs page to see which node the job is running on, then on that machine:
		tail -10f flink-root-taskmanager-0-node3.out

on yarn mode (run Flink on a YARN cluster)

	vi yarn-site.xml
		<!-- disable virtual-memory checking so Flink containers are not killed at startup -->
		<property>
			<name>yarn.nodemanager.vmem-check-enabled</name>
			<value>false</value>
		</property>

	Start the HDFS and YARN clusters

	**********Option 1: initialize a Flink session cluster on YARN with fixed resources*****************
		Start the Flink session
			./bin/yarn-session.sh -n 2 -jm 700 -tm 700
			***startup succeeded***
			Number of connected TaskManagers changed to 1. Slots available: 1
			Number of connected TaskManagers changed to 2. Slots available: 2

		Submit a job
			./bin/flink run flink-1.0-SNAPSHOT-jar-with-dependencies.jar --port 9009

		Access the UI
			In the YARN web UI, find the RUNNING application and click its ApplicationMaster link to enter the Flink UI
			In the Flink UI, use Running Jobs to find which node the job runs on; under Task Managers open that node and click Stdout to see the output

		Tear down the session and release the YARN resources
			yarn application -kill application_1538559819716_0004
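The application id changes on every session start, so it can be pulled out of `yarn application -list` instead of copied by hand. A sketch; the output format (one line per application, id in the first column) is assumed from Hadoop 2.x:

```shell
#!/usr/bin/env bash
# Extract YARN application ids from `yarn application -list` output;
# application lines start with "application_" in the first column.
yarn_app_ids() {
  awk '$1 ~ /^application_/ {print $1}'
}

# Example against a live YARN cluster; this kills EVERY listed application,
# so filter further (e.g. by name) if other jobs share the cluster:
# yarn application -list | yarn_app_ids | xargs -r -n1 yarn application -kill
```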
	Common errors
		in safe mode
			After starting Hadoop, wait for HDFS to leave safe mode before starting the Flink cluster
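Rather than polling by hand, HDFS can be asked to block until safe mode ends: `hdfs dfsadmin -safemode wait` does exactly that. The parsing helper below, for scripts that want to poll `-safemode get` themselves, is just a sketch around its "Safe mode is ON/OFF" output:

```shell
#!/usr/bin/env bash
# Returns 0 when the given `hdfs dfsadmin -safemode get` output says OFF.
safemode_is_off() {
  case "$1" in
    *"Safe mode is OFF"*) return 0 ;;
    *)                    return 1 ;;
  esac
}

# Simplest form on a live cluster: block, then start the session.
# hdfs dfsadmin -safemode wait && ./bin/yarn-session.sh -n 2 -jm 700 -tm 700
```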
		Neither the HADOOP_CONF_DIR nor the YARN_CONF_DIR environment variable is set. The Flink YARN Client needs one of these to be set to properly load the Hadoop configuration for accessing YARN.
			Set at least one of HADOOP_CONF_DIR, YARN_CONF_DIR, or HADOOP_HOME as an environment variable, e.g.:
			export HADOOP_HOME=/export/server/hadoop-2.7.4
			export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
			export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
			export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
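The three variables normally point into the same Hadoop install, so the conf dirs can be derived from HADOOP_HOME when they are not already set. A sketch using the path from the listing above:

```shell
#!/usr/bin/env bash
# Derive the Hadoop/YARN conf dirs from HADOOP_HOME when unset,
# matching what the Flink YARN client looks for.
export HADOOP_HOME=${HADOOP_HOME:-/export/server/hadoop-2.7.4}
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-$HADOOP_HOME/etc/hadoop}
export YARN_CONF_DIR=${YARN_CONF_DIR:-$HADOOP_CONF_DIR}
```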
		First attempt failed -- exceeded YARN's virtual memory limit
			./bin/yarn-session.sh -n 2 -jm 1024 -tm 1024
			is running beyond virtual memory limits. Current usage: 254.0 MB of 1 GB physical memory used; 2.2 GB of 2.1 GB virtual memory used. Killing container.
		Second attempt failed -- requested memory below the minimum
			./bin/yarn-session.sh -n 2 -jm 512 -tm 512
			The configuration value 'containerized.heap-cutoff-min' is higher (600) than the requested amount of memory 512
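The two failures bracket the working value: the requested TaskManager memory must be at least the heap-cutoff floor (600 MB here), while staying within YARN's virtual-memory limit unless that check is disabled as above, which is why 700 works. A small sketch of the lower-bound check; the 600 MB default for containerized.heap-cutoff-min is an assumption from Flink 1.x:

```shell
#!/usr/bin/env bash
# Sanity-check a requested TaskManager memory size (MB) against the
# containerized.heap-cutoff-min floor before launching a session.
HEAP_CUTOFF_MIN=600   # assumed Flink 1.x default

check_tm_memory() {   # usage: check_tm_memory <mb>; 0 if acceptable
  [ "$1" -ge "$HEAP_CUTOFF_MIN" ]
}

# check_tm_memory 700 && ./bin/yarn-session.sh -n 2 -jm 700 -tm 700
```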

	*********Option 2: create a new Flink cluster per job, so jobs do not affect each other (highly recommended)****
		Launch command
			./bin/flink run -m yarn-cluster -yn 2 -yjm 700 -ytm 700 flink-1.0-SNAPSHOT-jar-with-dependencies.jar --port 9009