  • From the column 个人积累

    'Getting started with Excel read/write and report statistics in Python'

    PS D:\python-workplace> & D:/python/python.exe d:/python-workplace/excel-demo1.py
    Pie sold apple 50 balana …

    … CharacterProperties ) ws = wb.create_sheet('AreaChart') data = [ ['Pie', 'sold'], ['apple',50], ['balana …

    Published on 2021-02-26
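
    The excerpt above is a truncated openpyxl fragment: it creates an 'AreaChart' sheet and fills it with a small ['Pie', 'sold'] table. A minimal self-contained sketch of what that fragment appears to be doing, assuming openpyxl is installed (the value for 'balana' is cut off in the excerpt, so 30 below is a hypothetical placeholder):

    ```python
    from openpyxl import Workbook
    from openpyxl.chart import AreaChart, Reference

    wb = Workbook()
    ws = wb.create_sheet('AreaChart')

    # Header row plus data rows, as in the excerpt; the 'balana' value
    # is truncated in the source, so 30 is a placeholder.
    data = [
        ['Pie', 'sold'],
        ['apple', 50],
        ['balana', 30],
    ]
    for row in data:
        ws.append(row)

    # Build an area chart over the 'sold' column, categorized by fruit name.
    chart = AreaChart()
    chart.title = 'sold'
    values = Reference(ws, min_col=2, min_row=1, max_row=len(data))
    cats = Reference(ws, min_col=1, min_row=2, max_row=len(data))
    chart.add_data(values, titles_from_data=True)
    chart.set_categories(cats)
    ws.add_chart(chart, 'D2')

    wb.save('excel-demo1.xlsx')
    ```

    Saving produces a workbook whose 'AreaChart' sheet holds both the raw rows and the rendered chart, which matches the console runs of excel-demo1.py shown in the excerpt.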
  • From the column 算法之名

    Submitting a first Spark word-count program, paired with Hadoop HDFS

    On a Linux system, write an arbitrary file, say a.txt, containing a few random words: ice park dog fish dinsh cark balana apple fuck fool my him

    /hdfs dfs -cat /usr/file/a.txt
    ice park dog fish dinsh cark balana apple fuck fool my him cry

    At this point we also … what we need:

    /hdfs dfs -cat /usr/file/wcount/part-00001
    (ice,1) (cark,1) (balana,1) (fuck,1) …

    This gives us the result we need, as a text file

    Published on 2019-08-20
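
    The article's word-count job runs on Spark, but the flatMap → map → reduceByKey pipeline it describes can be sketched in plain Python (no Spark required) to show how the excerpt's (word,1) output pairs arise. This is only an analogue of the transformation semantics, not the article's actual Spark code:

    ```python
    from collections import Counter

    # Contents of a.txt from the article (space-separated words).
    text = "ice park dog fish dinsh cark balana apple fuck fool my him cry"

    # flatMap: split each line into individual words.
    words = text.split()

    # map: pair each word with 1, like word count's (word, 1) tuples.
    pairs = [(w, 1) for w in words]

    # reduceByKey: sum the 1s per word.
    counts = Counter()
    for word, one in pairs:
        counts[word] += one

    # Print in the same "(word,count)" shape as the part-00001 output.
    for word, n in counts.items():
        print(f"({word},{n})")
    ```

    Since every word in a.txt is distinct, each count is 1, which matches the (ice,1) (cark,1) (balana,1) lines read back with hdfs dfs -cat.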
  • From the column 算法之名

    Spark RDD (series installment)

    … (cry,CompactBuffer(1)), (my,CompactBuffer(1)), (ice,CompactBuffer(1)), (cark,CompactBuffer(1)), (balana …

    … (park,1), (fool,1), (dinsh,1), (fish,1), (dog,1), (apple,1), (cry,1), (my,1), (ice,1), (cark,1), (balana …

    Published on 2019-08-20
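
    The two output fragments in the excerpt come from Spark's groupByKey (values collected per key into CompactBuffers) and from then summing those buffers into (word, count) pairs. A plain-Python analogue of those two steps, assuming the same (word, 1) pairs as the word-count article:

    ```python
    from collections import defaultdict

    # The same (word, 1) pairs produced by the word-count map step.
    pairs = [(w, 1) for w in
             "ice park dog fish dinsh cark balana apple fuck fool my him cry".split()]

    # groupByKey: collect every value for a key into a list,
    # analogous to Spark's (word, CompactBuffer(1)) output.
    grouped = defaultdict(list)
    for word, one in pairs:
        grouped[word].append(one)

    # Summing each buffer yields the (word, count) pairs
    # seen in the excerpt's second output.
    counts = {word: sum(ones) for word, ones in grouped.items()}
    ```

    Note that in real Spark, reduceByKey is preferred over groupByKey-then-sum because it combines values on each partition before shuffling; the two-step version here only mirrors the shapes shown in the excerpt.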