
  • Download and start Flink
  • Review the code
  • Run the example
  • Next steps

Download and start Flink

Flink runs on Linux, Mac OS X, and Windows. All you need to run Flink is Java 7.x or higher. Windows users, please follow this link for the relevant setup documentation.

You can check your installed Java version with the following command:

 java -version

If you have Java 8 installed, the output will look something like this:

java version "1.8.0_111"
Java(TM) SE Runtime Environment (build 1.8.0_111-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode)

 

The following walks through an installation on Linux (the steps on macOS are essentially the same):

 

  1. Download a Flink package from the downloads page; you can pick any Hadoop/Scala combination. If you only plan to use the local file system (a local cluster), a Flink build for any Hadoop version will do. For a production environment, pick the Flink build that matches the Hadoop version running on your cluster. (A command-line download sketch follows the snippet below.)
  2. Go to the directory you downloaded the file into.
  3. Extract the archive.
$ cd ~/Downloads        # go to the download directory
$ tar xzf flink-*.tgz   # extract the downloaded archive
$ cd flink-1.4.1
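
If you prefer to fetch the package from the command line instead, here is a minimal sketch. The URL below is an assumption (the Apache archive path for the 1.4.1 build with Hadoop 2.7 and Scala 2.11); adjust the version and suffixes to the combination you picked.

$ wget https://archive.apache.org/dist/flink/flink-1.4.1/flink-1.4.1-bin-hadoop27-scala_2.11.tgz   # assumed URL, adjust to your choice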

Start a local Flink cluster

./bin/start-local.sh  # start a local Flink cluster

Open http://localhost:8081 in your browser to inspect the cluster.
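
Instead of the browser, you can also query the monitoring REST API that the web frontend serves on the same port; a minimal sketch:

$ curl http://localhost:8081/overview   # returns a short JSON cluster summary (TaskManagers, slots, jobs)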

You can also check the logs to verify that the system is up and running:

$ tail log/flink-*-jobmanager-*.log
INFO ... - Starting JobManager
INFO ... - Starting JobManager web frontend
INFO ... - Web frontend listening at 127.0.0.1:8081
INFO ... - Registered TaskManager at 127.0.0.1 (akka://flink/user/taskmanager)

 

Review the code

You can find the complete source code of SocketWindowWordCount on GitHub, in both Java and Scala.

Scala code

import org.apache.flink.api.java.utils.ParameterTool
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.time.Time

object SocketWindowWordCount {

    def main(args: Array[String]) : Unit = {

        // the port to connect to
        val port: Int = try {
            ParameterTool.fromArgs(args).getInt("port")
        } catch {
            case e: Exception => {
                System.err.println("No port specified. Please run 'SocketWindowWordCount --port <port>'")
                return
            }
        }

        // get the execution environment
        val env: StreamExecutionEnvironment = StreamExecutionEnvironment.getExecutionEnvironment

        // get input data by connecting to the socket
        val text = env.socketTextStream("localhost", port, '\n')

        // parse the data, group it, window it, and aggregate the counts;
        // note: the wildcard import of the streaming scala API above provides the
        // implicit TypeInformation conversions -- without it, flatMap will not compile
        val windowCounts = text
            .flatMap { w => w.split("\\s") }
            .map { w => WordWithCount(w, 1) }
            .keyBy("word")
            .timeWindow(Time.seconds(5), Time.seconds(1))
            .sum("count")

        // print the results with a single thread, rather than in parallel
        windowCounts.print().setParallelism(1)

        env.execute("Socket Window WordCount")
    }

    // data type for words with count
    case class WordWithCount(word: String, count: Long)
}

Java code

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.common.functions.ReduceFunction;
import org.apache.flink.api.java.utils.ParameterTool;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.util.Collector;

public class SocketWindowWordCount {

    public static void main(String[] args) throws Exception {

        // the port to connect to
        final int port;
        try {
            final ParameterTool params = ParameterTool.fromArgs(args);
            port = params.getInt("port");
        } catch (Exception e) {
            System.err.println("No port specified. Please run 'SocketWindowWordCount --port <port>'");
            return;
        }

        // get the execution environment
        final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // get input data by connecting to the socket
        DataStream<String> text = env.socketTextStream("localhost", port, "\n");

        // parse the data, group it, window it, and aggregate the counts
        DataStream<WordWithCount> windowCounts = text
            .flatMap(new FlatMapFunction<String, WordWithCount>() {
                @Override
                public void flatMap(String value, Collector<WordWithCount> out) {
                    for (String word : value.split("\\s")) {
                        out.collect(new WordWithCount(word, 1L));
                    }
                }
            })
            .keyBy("word")
            .timeWindow(Time.seconds(5), Time.seconds(1))
            .reduce(new ReduceFunction<WordWithCount>() {
                @Override
                public WordWithCount reduce(WordWithCount a, WordWithCount b) {
                    return new WordWithCount(a.word, a.count + b.count);
                }
            });

        // print the results with a single thread, rather than in parallel
        windowCounts.print().setParallelism(1);

        env.execute("Socket Window WordCount");
    }

    // data type for words with count
    public static class WordWithCount {

        public String word;
        public long count;

        public WordWithCount() {}

        public WordWithCount(String word, long count) {
            this.word = word;
            this.count = count;
        }

        @Override
        public String toString() {
            return word + " : " + count;
        }
    }
}
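
If you compile one of these classes yourself rather than using the example jar shipped with the distribution, you can submit your own artifact and name the entry class explicitly via the CLI's -c option. A minimal sketch; the jar path is a hypothetical placeholder:

$ ./bin/flink run -c SocketWindowWordCount path/to/your-wordcount.jar --port 9000   # jar path is a placeholder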

 

Run the example

Now we can run this Flink example. It reads text from a socket and, over 5-second time windows, counts and prints how often each word occurs.

 

  • First, use netcat to start a local socket server:
$ nc -l 9000
  • Submit the Flink program:
    $ ./bin/flink run examples/streaming/SocketWindowWordCount.jar --port 9000
    
    Cluster configuration: Standalone cluster with JobManager at /127.0.0.1:6123
    Using address 127.0.0.1:6123 to connect to JobManager.
    JobManager web interface address http://127.0.0.1:8081
    Starting execution of program
    Submitting job with JobID: 574a10c8debda3dccd0c78a3bde55e1b. Waiting for job completion.
    Connected to JobManager at Actor[akka.tcp://flink@127.0.0.1:6123/user/jobmanager#297388688]
    11/04/2016 14:04:50     Job execution switched to status RUNNING.
    11/04/2016 14:04:50     Source: Socket Stream -> Flat Map(1/1) switched to SCHEDULED
    11/04/2016 14:04:50     Source: Socket Stream -> Flat Map(1/1) switched to DEPLOYING
    11/04/2016 14:04:50     Fast TumblingProcessingTimeWindows(5000) of WindowedStream.main(SocketWindowWordCount.java:79) -> Sink: Unnamed(1/1) switched to SCHEDULED
    11/04/2016 14:04:51     Fast TumblingProcessingTimeWindows(5000) of WindowedStream.main(SocketWindowWordCount.java:79) -> Sink: Unnamed(1/1) switched to DEPLOYING
    11/04/2016 14:04:51     Fast TumblingProcessingTimeWindows(5000) of WindowedStream.main(SocketWindowWordCount.java:79) -> Sink: Unnamed(1/1) switched to RUNNING
    11/04/2016 14:04:51     Source: Socket Stream -> Flat Map(1/1) switched to RUNNING
    The program connects to the socket and waits for input. You can watch the job's progress in the web UI at http://localhost:8081.

  • Words are counted in 5-second windows and printed to the console. Watch the TaskManager's output file and type some text into the nc console, pressing Enter after each line:
$ nc -l 9000
lorem ipsum
ipsum ipsum ipsum
bye

The word counts for each time window are written to the TaskManager's .out file:

$ tail -f log/flink-*-taskmanager-*.out
lorem : 1
bye : 1
ipsum : 4

When you are done with the experiment, stop Flink:

$ ./bin/stop-local.sh
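
To double-check that the cluster is down, you can list the running JVMs; a sketch under the assumption that the standalone daemons show up as JobManager and TaskManager in jps output (the names may differ across versions):

$ jps   # no JobManager / TaskManager entries should remain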

 

Next steps

Check out more examples to get familiar with Flink's APIs. When you are done with those, move on to the streaming guide below.

 

 

 
