Add the following to hadoop-0.20.2/conf/hadoop-env.sh:

export HBASE_HOME=/home/miao/hbase/
export HADOOP_CLASSPATH=$HBASE_HOME/hbase-0.90.0.jar:$HBASE_HOME:$HBASE_HOME/lib/zookeeper-3.3.2.jar:$HBASE_HOME/conf
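A quick way to confirm that Hadoop can now see the HBase classes is to run the HBase jar with no program name; the MapReduce Driver bundled in it should reply with the list of valid program names (export, import, importtsv, completebulkload, and so on) rather than a ClassNotFoundException. A small check, assuming the jar path used in the examples below (adjust it to wherever your HBase jar actually lives):

hadoop-0.20.2/bin/hadoop jar /home/miao/hbase/hbase-0.90.6/hbase-0.90.6.jar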

2. Export

Command: hadoop-0.20.2/bin/hadoop jar /home/rain/hbase/hbase-0.90.6/hbase-0.90.6.jar export <table name> <output path>

For example:

hadoop-0.20.2/bin/hadoop jar /home/miao/hbase/hbase-0.90.6/hbase-0.90.6.jar export tablename /home/miao/tableout

Copy the result out of HDFS:

hadoop-0.20.2/bin/hadoop fs -copyToLocal /home/miao/tableout /home/miao/tableout

Check whether /home/rain/tableout contains part-m-000000; this file holds the exported contents of the table.
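Before copying, you can also confirm that the export actually produced output by listing the HDFS directory; a small sketch using the paths from the example above:

hadoop-0.20.2/bin/hadoop fs -ls /home/miao/tableout

Each map task of the export job writes one part-m-* file, so there may be more than one.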


3. Import

Upload part-m-000000 back into HDFS.

Command: hadoop-0.20.2/bin/hadoop fs -copyFromLocal <local data path> <destination HDFS path>

For example:

hadoop-0.20.2/bin/hadoop fs -copyFromLocal /home/miao/data /user/hadoop/

Import from HDFS into HBase

Command: hadoop-0.20.2/bin/hadoop jar hbase0.90.6/hbase-0.90.6.jar import <table name> <HDFS path>    ## The table must be created in advance, otherwise the import will fail

hadoop-0.20.2/bin/hadoop jar hbase0.90.6/hbase-0.90.6.jar import tablename /user/hadoop
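Because the import fails when the target table is missing, create it in the HBase shell first. A minimal sketch; the table name and the column family name 'cf' are placeholders and must match the schema of the table that was exported:

hbase shell
create 'tablename', 'cf'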


Approach 2:


1. Export:

Export the data of an HBase table into Hadoop (HDFS):

hbase org.apache.hadoop.hbase.mapreduce.Driver export <table name> <data file location>

The table must already exist in HBase for the export to succeed.
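A concrete invocation, where the table name and the output directory /user/miao/tablename are placeholders matching the import example that follows:

hbase org.apache.hadoop.hbase.mapreduce.Driver export tablename /user/miao/tablename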

2. Import:

To import the table into HBase on another machine, first put the exported data into that machine's HDFS; assume the put path is also /user/miao/tablename/. The same table must also be created in the target machine's HBase beforehand. Then import the data from Hadoop into HBase (a worked sketch follows the command below):

hbase org.apache.hadoop.hbase.mapreduce.Driver import <table name> <data file location>
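Putting the pieces together on the target machine, a rough sketch (the paths, the table name, and the column family 'cf' are assumptions and must match the source table):

# copy the exported files into the target cluster's HDFS
hadoop fs -copyFromLocal /home/miao/tablename /user/miao/tablename
# recreate the table with the same column families as on the source cluster
echo "create 'tablename', 'cf'" | hbase shell
# load the data into the table
hbase org.apache.hadoop.hbase.mapreduce.Driver import tablename /user/miao/tablename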



-------------------------------------------------------------
First, a rough description of the problem. When importing data the two-step way:

Step 1: generate the HFiles

hadoop jar hbase-version.jar importtsv -Dimporttsv.columns=HBASE_ROW_KEY,c1,c2 -Dimporttsv.bulk.output=tmp hbase_table hdfs_file

Two things to note about this step: the c1, c2 columns must specify both the column family and the qualifier, e.g. cf:1,cf:2 (a concrete command is sketched below); and with -Dimporttsv.bulk.output=tmp, the generated tmp output ends up under /user/hadoop/tmp on HDFS.
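For instance, with a column family named cf (an assumed name), the step 1 command from above becomes:

hadoop jar hbase-version.jar importtsv -Dimporttsv.columns=HBASE_ROW_KEY,cf:1,cf:2 -Dimporttsv.bulk.output=tmp hbase_table hdfs_file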
Step 2: load into the HBase table. This step is essentially a move (mv): the HFiles generated in step 1 are moved into HBase.

hadoop jar hbase-version.jar completebulkload /user/hadoop/tmp/cf hbase_table

Done this way, the data loads into HBase correctly.

When the intermediate files are skipped and the data is loaded directly:

hadoop jar hbase-version.jar importtsv -Dimporttsv.columns=HBASE_ROW_KEY,c1,c2 hbase_table hdfs_file


It is worth noting that, based on testing, with larger data volumes the two-step approach has advantages in both efficiency and stability over the one-step import:
1. With the two-step import, because the files are generated first and then moved into the HBase table, the impact on HBase read performance is small and there are no large fluctuations in read latency;
2. The two-step import saves roughly two thirds of the time, i.e. a 60%~70% efficiency gain: the map phase is short and there is only a single reduce, whereas the one-step import has a long map phase and no reduce. Since Hadoop clusters are usually configured with a relatively high maximum number of map slots and a lower maximum number of reduce slots, the two-step approach also improves the cluster's overall concurrency to some degree.
 




-----------------------------------------------------------------

16.1.9. Export

Export is a utility that will dump the contents of table to HDFS in a sequence file. Invoke via:

$ bin/hbase org.apache.hadoop.hbase.mapreduce.Export <tablename> <outputdir> [<versions> [<starttime> [<endtime>]]]

Note: caching for the input Scan is configured via hbase.client.scanner.caching in the job configuration.
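Export accepts the generic Hadoop -D options, so the scanner caching can be set on the command line; for example (the value 100 is only an illustration, tune it to your row size):

$ bin/hbase org.apache.hadoop.hbase.mapreduce.Export -D hbase.client.scanner.caching=100 <tablename> <outputdir>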

16.1.10. Import

Import is a utility that will load data that has been exported back into HBase. Invoke via:

$ bin/hbase org.apache.hadoop.hbase.mapreduce.Import <tablename> <inputdir>

To import 0.94 exported files in a 0.96 cluster or onwards, you need to set system property "hbase.import.version" when running the import command as below:

$ bin/hbase -Dhbase.import.version=0.94 org.apache.hadoop.hbase.mapreduce.Import <tablename> <inputdir>

16.1.11. ImportTsv

ImportTsv is a utility that will load data in TSV format into HBase. It has two distinct usages: loading data from TSV format in HDFS into HBase via Puts, and preparing StoreFiles to be loaded via the completebulkload.

To load data via Puts (i.e., non-bulk loading):

$ bin/hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.columns=a,b,c <tablename> <hdfs-inputdir>

To generate StoreFiles for bulk-loading:

$ bin/hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.columns=a,b,c -Dimporttsv.bulk.output=hdfs://storefile-outputdir <tablename> <hdfs-data-inputdir>

These generated StoreFiles can be loaded into HBase via Section 16.1.12, “CompleteBulkLoad”.
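That step can be invoked either through the Driver in the HBase jar or via the LoadIncrementalHFiles class directly; a sketch of both forms, reusing the output directory from the command above:

$ bin/hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles hdfs://storefile-outputdir <tablename>
$ hadoop jar hbase-VERSION.jar completebulkload hdfs://storefile-outputdir <tablename>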

16.1.11.1. ImportTsv Options

Running ImportTsv with no arguments prints brief usage information:

Usage: importtsv -Dimporttsv.columns=a,b,c <tablename> <inputdir>

Imports the given input directory of TSV data into the specified table.

The column names of the TSV data must be specified using the -Dimporttsv.columns
option. This option takes the form of comma-separated column names, where each
column name is either a simple column family, or a columnfamily:qualifier. The special
column name HBASE_ROW_KEY is used to designate that this column should be used
as the row key for each imported record. You must specify exactly one column
to be the row key, and you must specify a column name for every column that exists in the
input data.

By default importtsv will load data directly into HBase. To instead generate
HFiles of data to prepare for a bulk data load, pass the option:
  -Dimporttsv.bulk.output=/path/for/output
  Note: the target table will be created with default column family descriptors if it does not already exist.

Other options that may be specified with -D include:
  -Dimporttsv.skip.bad.lines=false - fail if encountering an invalid line
  '-Dimporttsv.separator=|' - eg separate on pipes instead of tabs
  -Dimporttsv.timestamp=currentTimeAsLong - use the specified timestamp for the import
  -Dimporttsv.mapper.class=my.Mapper - A user-defined Mapper to use instead of org.apache.hadoop.hbase.mapreduce.TsvImporterMapper
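As an illustration of the separator option, a pipe-delimited input could be loaded like this (the quotes keep the shell from interpreting the | character; the column names follow the example in the next section):

$ bin/hbase org.apache.hadoop.hbase.mapreduce.ImportTsv '-Dimporttsv.separator=|' -Dimporttsv.columns=HBASE_ROW_KEY,d:c1,d:c2 <tablename> <hdfs-inputdir>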
        
16.1.11.2. ImportTsv Example

For example, assume that we are loading data into a table called 'datatsv' with a ColumnFamily called 'd' with two columns "c1" and "c2".

Assume that an input file exists as follows:

row1	c1	c2
row2	c1	c2
row3	c1	c2
row4	c1	c2
row5	c1	c2
row6	c1	c2
row7	c1	c2
row8	c1	c2
row9	c1	c2
row10	c1	c2
          

For ImportTsv to use this input file, the command line needs to look like this:

 HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-VERSION.jar importtsv -Dimporttsv.columns=HBASE_ROW_KEY,d:c1,d:c2 -Dimporttsv.bulk.output=hdfs://storefileoutput datatsv hdfs://inputfile
 

... and in this example the first column is the rowkey, which is why the HBASE_ROW_KEY is used. The second and third columns in the file will be imported as "d:c1" and "d:c2", respectively.

16.1.11.3. ImportTsv Warning

If you are preparing a lot of data for bulk loading, make sure the target HBase table is pre-split appropriately.
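Depending on the HBase version, the shell's create command accepts a SPLITS option for pre-splitting at creation time; a sketch (the split points here are arbitrary and should be chosen to match the row key distribution of your data):

hbase> create 'datatsv', 'd', SPLITS => ['row3', 'row6', 'row9']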


http://hbase.apache.org/book/ops_mgt.html#export







