HBase Programming API Primer Series: HTable pool (6)

Summary:

HTable is a fairly heavyweight object: creating one means loading configuration files, connecting to ZooKeeper, querying the meta table, and so on, which drags down system performance under high concurrency. This is why the concept of a "pool" is introduced.

The purpose of introducing a connection pool in HBase is to improve the program's concurrency and access speed.

You take a connection from the "pool", and when you are done you simply return it to the "pool".

package zhouls.bigdata.HbaseProject.Pool;

import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HConnection;
import org.apache.hadoop.hbase.client.HConnectionManager;

public class TableConnection {

    private static HConnection connection = null;

    private TableConnection() {
    }

    // Lazily create one shared HConnection. Synchronized so that two
    // concurrent callers cannot each create a connection.
    public static synchronized HConnection getConnection() {
        if (connection == null) {
            ExecutorService pool = Executors.newFixedThreadPool(10);// a fixed-size pool of 10 threads
            Configuration conf = HBaseConfiguration.create();
            conf.set("hbase.zookeeper.quorum", "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
            try {
                connection = HConnectionManager.createConnection(conf, pool);// build the connection from this config and thread pool
            } catch (IOException e) {
                e.printStackTrace();// don't swallow the failure silently
            }
        }
        return connection;
    }
}
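
A note on versions: HConnection and HConnectionManager belong to the HBase 0.98/1.x client API and are deprecated in later releases. On HBase 1.0+ the same shared-connection pattern is usually written with Connection and ConnectionFactory, which manage an internal thread pool themselves. A minimal sketch (the class name ModernTableConnection is made up for illustration; same ZooKeeper quorum assumed as above):

package zhouls.bigdata.HbaseProject.Pool;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ModernTableConnection {

    private static Connection connection = null;

    private ModernTableConnection() {
    }

    // Same lazy shared-connection idea as TableConnection above,
    // but with the non-deprecated HBase 1.0+ API.
    public static synchronized Connection getConnection() throws IOException {
        if (connection == null) {
            Configuration conf = HBaseConfiguration.create();
            conf.set("hbase.zookeeper.quorum", "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
            connection = ConnectionFactory.createConnection(conf);// manages its own thread pool internally
        }
        return connection;
    }
}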
Now, how do we use this "pool" in a program?

TableConnection is a shared, ready-built "pool"; you can keep reusing it as a template.

1. Using the "pool" in place of the approach from the earlier post:

HBase Programming API Primer Series: put (client side) (1)
package zhouls.bigdata.HbaseProject.Pool;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseTest {

    public static void main(String[] args) throws Exception {
//        HTable table = new HTable(getConfig(), TableName.valueOf("test_table"));// table name is test_table
//        Put put = new Put(Bytes.toBytes("row_04"));// row key is row_04
//        put.add(Bytes.toBytes("f"), Bytes.toBytes("name"), Bytes.toBytes("Andy1"));// family f, qualifier name, value Andy1
//        put.add(Bytes.toBytes("f2"), Bytes.toBytes("name"), Bytes.toBytes("Andy3"));// family f2, qualifier name, value Andy3
//        table.put(put);
//        table.close();

//        Get get = new Get(Bytes.toBytes("row_04"));
//        get.addColumn(Bytes.toBytes("f1"), Bytes.toBytes("age"));// left unspecified, as here, every column is returned by default
//        Result rest = table.get(get);
//        System.out.println(rest.toString());
//        table.close();

//        Delete delete = new Delete(Bytes.toBytes("row_2"));
//        delete.deleteColumn(Bytes.toBytes("f1"), Bytes.toBytes("email"));
//        delete.deleteColumn(Bytes.toBytes("f1"), Bytes.toBytes("name"));
//        table.delete(delete);
//        table.close();

//        Delete delete = new Delete(Bytes.toBytes("row_04"));
////        delete.deleteColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));// deleteColumn deletes only the newest timestamp version of a column
//        delete.deleteColumns(Bytes.toBytes("f"), Bytes.toBytes("name"));// deleteColumns deletes all timestamp versions of a column
//        table.delete(delete);
//        table.close();

//        Scan scan = new Scan();
//        scan.setStartRow(Bytes.toBytes("row_01"));// start row key, inclusive
//        scan.setStopRow(Bytes.toBytes("row_03"));// stop row key, exclusive
//        scan.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
//        ResultScanner rst = table.getScanner(scan);// loop over the whole result set
//        System.out.println(rst.toString());
//        for (Result next = rst.next(); next != null; next = rst.next()) {
//            for (Cell cell : next.rawCells()) {// loop over the cells under one row key
//                System.out.println(next.toString());
//                System.out.println("family:" + Bytes.toString(CellUtil.cloneFamily(cell)));
//                System.out.println("col:" + Bytes.toString(CellUtil.cloneQualifier(cell)));
//                System.out.println("value:" + Bytes.toString(CellUtil.cloneValue(cell)));
//            }
//        }
//        table.close();

        HBaseTest hbasetest = new HBaseTest();
        hbasetest.insertValue();
    }

    public void insertValue() throws Exception {
        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
        Put put = new Put(Bytes.toBytes("row_04"));// row key is row_04
        put.add(Bytes.toBytes("f"), Bytes.toBytes("name"), Bytes.toBytes("北京"));
        table.put(put);
        table.close();
    }

    public static Configuration getConfig() {
        Configuration configuration = HBaseConfiguration.create();// load the HBase defaults, not a bare Hadoop Configuration
//        configuration.set("hbase.rootdir", "hdfs://HadoopMaster:9000/hbase");
        configuration.set("hbase.zookeeper.quorum", "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
        return configuration;
    }
}

hbase(main):035:0> scan 'test_table'
ROW COLUMN+CELL 
row_01 column=f:col, timestamp=1478095650110, value=maizi 
row_01 column=f:name, timestamp=1478095741767, value=Andy2 
row_02 column=f:name, timestamp=1478095849538, value=Andy2 
row_03 column=f:name, timestamp=1478095893278, value=Andy3 
row_04 column=f:name, timestamp=1478096702098, value=Andy1 
4 row(s) in 0.1190 seconds

hbase(main):036:0> scan 'test_table'
ROW COLUMN+CELL 
row_01 column=f:col, timestamp=1478095650110, value=maizi 
row_01 column=f:name, timestamp=1478095741767, value=Andy2 
row_02 column=f:name, timestamp=1478095849538, value=Andy2 
row_03 column=f:name, timestamp=1478095893278, value=Andy3 
row_04 column=f:name, timestamp=1478097220790, value=\xE5\x8C\x97\xE4\xBA\xAC 
4 row(s) in 0.5970 seconds

hbase(main):037:0>

package zhouls.bigdata.HbaseProject.Pool;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseTest {

    public static void main(String[] args) throws Exception {
//        HTable table = new HTable(getConfig(), TableName.valueOf("test_table"));// table name is test_table
//        Put put = new Put(Bytes.toBytes("row_04"));// row key is row_04
//        put.add(Bytes.toBytes("f"), Bytes.toBytes("name"), Bytes.toBytes("Andy1"));// family f, qualifier name, value Andy1
//        put.add(Bytes.toBytes("f2"), Bytes.toBytes("name"), Bytes.toBytes("Andy3"));// family f2, qualifier name, value Andy3
//        table.put(put);
//        table.close();

//        Get get = new Get(Bytes.toBytes("row_04"));
//        get.addColumn(Bytes.toBytes("f1"), Bytes.toBytes("age"));// left unspecified, as here, every column is returned by default
//        Result rest = table.get(get);
//        System.out.println(rest.toString());
//        table.close();

//        Delete delete = new Delete(Bytes.toBytes("row_2"));
//        delete.deleteColumn(Bytes.toBytes("f1"), Bytes.toBytes("email"));
//        delete.deleteColumn(Bytes.toBytes("f1"), Bytes.toBytes("name"));
//        table.delete(delete);
//        table.close();

//        Delete delete = new Delete(Bytes.toBytes("row_04"));
////        delete.deleteColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));// deleteColumn deletes only the newest timestamp version of a column
//        delete.deleteColumns(Bytes.toBytes("f"), Bytes.toBytes("name"));// deleteColumns deletes all timestamp versions of a column
//        table.delete(delete);
//        table.close();

//        Scan scan = new Scan();
//        scan.setStartRow(Bytes.toBytes("row_01"));// start row key, inclusive
//        scan.setStopRow(Bytes.toBytes("row_03"));// stop row key, exclusive
//        scan.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
//        ResultScanner rst = table.getScanner(scan);// loop over the whole result set
//        System.out.println(rst.toString());
//        for (Result next = rst.next(); next != null; next = rst.next()) {
//            for (Cell cell : next.rawCells()) {// loop over the cells under one row key
//                System.out.println(next.toString());
//                System.out.println("family:" + Bytes.toString(CellUtil.cloneFamily(cell)));
//                System.out.println("col:" + Bytes.toString(CellUtil.cloneQualifier(cell)));
//                System.out.println("value:" + Bytes.toString(CellUtil.cloneValue(cell)));
//            }
//        }
//        table.close();

        HBaseTest hbasetest = new HBaseTest();
        hbasetest.insertValue();
    }

    public void insertValue() throws Exception {
        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
        Put put = new Put(Bytes.toBytes("row_05"));// row key is row_05
        put.add(Bytes.toBytes("f"), Bytes.toBytes("address"), Bytes.toBytes("beijng"));
        table.put(put);
        table.close();
    }

    public static Configuration getConfig() {
        Configuration configuration = HBaseConfiguration.create();// load the HBase defaults, not a bare Hadoop Configuration
//        configuration.set("hbase.rootdir", "hdfs://HadoopMaster:9000/hbase");
        configuration.set("hbase.zookeeper.quorum", "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
        return configuration;
    }
}

2016-12-11 14:22:14,784 INFO [org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper] - Process identifier=hconnection-0x19d12e87 connecting to ZooKeeper ensemble=HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181
2016-12-11 14:22:14,796 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
2016-12-11 14:22:14,796 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:host.name=WIN-BQOBV63OBNM
2016-12-11 14:22:14,796 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.version=1.7.0_51
2016-12-11 14:22:14,796 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.vendor=Oracle Corporation
2016-12-11 14:22:14,796 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.home=C:\Program Files\Java\jdk1.7.0_51\jre
2016-12-11 14:22:14,797 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.class.path=D:\Code\MyEclipseJavaCode\HbaseProject\bin;D:\SoftWare\hbase-1.2.3\lib\activation-1.1.jar;D:\SoftWare\hbase-1.2.3\lib\aopalliance-1.0.jar;D:\SoftWare\hbase-1.2.3\lib\apacheds-i18n-2.0.0-M15.jar;D:\SoftWare\hbase-1.2.3\lib\apacheds-kerberos-codec-2.0.0-M15.jar;D:\SoftWare\hbase-1.2.3\lib\api-asn1-api-1.0.0-M20.jar;D:\SoftWare\hbase-1.2.3\lib\api-util-1.0.0-M20.jar;D:\SoftWare\hbase-1.2.3\lib\asm-3.1.jar;D:\SoftWare\hbase-1.2.3\lib\avro-1.7.4.jar;D:\SoftWare\hbase-1.2.3\lib\commons-beanutils-1.7.0.jar;D:\SoftWare\hbase-1.2.3\lib\commons-beanutils-core-1.8.0.jar;D:\SoftWare\hbase-1.2.3\lib\commons-cli-1.2.jar;D:\SoftWare\hbase-1.2.3\lib\commons-codec-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\commons-collections-3.2.2.jar;D:\SoftWare\hbase-1.2.3\lib\commons-compress-1.4.1.jar;D:\SoftWare\hbase-1.2.3\lib\commons-configuration-1.6.jar;D:\SoftWare\hbase-1.2.3\lib\commons-daemon-1.0.13.jar;D:\SoftWare\hbase-1.2.3\lib\commons-digester-1.8.jar;D:\SoftWare\hbase-1.2.3\lib\commons-el-1.0.jar;D:\SoftWare\hbase-1.2.3\lib\commons-httpclient-3.1.jar;D:\SoftWare\hbase-1.2.3\lib\commons-io-2.4.jar;D:\SoftWare\hbase-1.2.3\lib\commons-lang-2.6.jar;D:\SoftWare\hbase-1.2.3\lib\commons-logging-1.2.jar;D:\SoftWare\hbase-1.2.3\lib\commons-math-2.2.jar;D:\SoftWare\hbase-1.2.3\lib\commons-math3-3.1.1.jar;D:\SoftWare\hbase-1.2.3\lib\commons-net-3.1.jar;D:\SoftWare\hbase-1.2.3\lib\disruptor-3.3.0.jar;D:\SoftWare\hbase-1.2.3\lib\findbugs-annotations-1.3.9-1.jar;D:\SoftWare\hbase-1.2.3\lib\guava-12.0.1.jar;D:\SoftWare\hbase-1.2.3\lib\guice-3.0.jar;D:\SoftWare\hbase-1.2.3\lib\guice-servlet-3.0.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-annotations-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-auth-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-client-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-common-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-hdfs-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-mapreduce-client-app-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-mapreduce-client-common-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-mapreduce-client-core-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-mapreduce-client-jobclient-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-mapreduce-client-shuffle-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-yarn-api-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-yarn-client-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-yarn-common-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-yarn-server-common-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-annotations-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-annotations-1.2.3-tests.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-client-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-common-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-common-1.2.3-tests.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-examples-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-external-blockcache-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-hadoop2-compat-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-hadoop-compat-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-it-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-it-1.2.3-tests.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-prefix-tree-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-procedure-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-protocol-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-resource-bundle-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-rest-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-server-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-server-1.2.3-tests.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-shell-1
.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-thrift-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\htrace-core-3.1.0-incubating.jar;D:\SoftWare\hbase-1.2.3\lib\httpclient-4.2.5.jar;D:\SoftWare\hbase-1.2.3\lib\httpcore-4.4.1.jar;D:\SoftWare\hbase-1.2.3\lib\jackson-core-asl-1.9.13.jar;D:\SoftWare\hbase-1.2.3\lib\jackson-jaxrs-1.9.13.jar;D:\SoftWare\hbase-1.2.3\lib\jackson-mapper-asl-1.9.13.jar;D:\SoftWare\hbase-1.2.3\lib\jackson-xc-1.9.13.jar;D:\SoftWare\hbase-1.2.3\lib\jamon-runtime-2.4.1.jar;D:\SoftWare\hbase-1.2.3\lib\jasper-compiler-5.5.23.jar;D:\SoftWare\hbase-1.2.3\lib\jasper-runtime-5.5.23.jar;D:\SoftWare\hbase-1.2.3\lib\javax.inject-1.jar;D:\SoftWare\hbase-1.2.3\lib\java-xmlbuilder-0.4.jar;D:\SoftWare\hbase-1.2.3\lib\jaxb-api-2.2.2.jar;D:\SoftWare\hbase-1.2.3\lib\jaxb-impl-2.2.3-1.jar;D:\SoftWare\hbase-1.2.3\lib\jcodings-1.0.8.jar;D:\SoftWare\hbase-1.2.3\lib\jersey-client-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\jersey-core-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\jersey-guice-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\jersey-json-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\jersey-server-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\jets3t-0.9.0.jar;D:\SoftWare\hbase-1.2.3\lib\jettison-1.3.3.jar;D:\SoftWare\hbase-1.2.3\lib\jetty-6.1.26.jar;D:\SoftWare\hbase-1.2.3\lib\jetty-sslengine-6.1.26.jar;D:\SoftWare\hbase-1.2.3\lib\jetty-util-6.1.26.jar;D:\SoftWare\hbase-1.2.3\lib\joni-2.1.2.jar;D:\SoftWare\hbase-1.2.3\lib\jruby-complete-1.6.8.jar;D:\SoftWare\hbase-1.2.3\lib\jsch-0.1.42.jar;D:\SoftWare\hbase-1.2.3\lib\jsp-2.1-6.1.14.jar;D:\SoftWare\hbase-1.2.3\lib\jsp-api-2.1-6.1.14.jar;D:\SoftWare\hbase-1.2.3\lib\junit-4.12.jar;D:\SoftWare\hbase-1.2.3\lib\leveldbjni-all-1.8.jar;D:\SoftWare\hbase-1.2.3\lib\libthrift-0.9.3.jar;D:\SoftWare\hbase-1.2.3\lib\log4j-1.2.17.jar;D:\SoftWare\hbase-1.2.3\lib\metrics-core-2.2.0.jar;D:\SoftWare\hbase-1.2.3\lib\netty-all-4.0.23.Final.jar;D:\SoftWare\hbase-1.2.3\lib\paranamer-2.3.jar;D:\SoftWare\hbase-1.2.3\lib\protobuf-java-2.5.0.jar;D:\SoftWare\hbase-1.2.3\lib\servlet-api-2.5.jar;D:\SoftWare\hbase-1.2.3\lib\servlet-api-2.5-6.1.14.jar;D:\SoftWare\hbase-1.2.3\lib\slf4j-api-1.7.7.jar;D:\SoftWare\hbase-1.2.3\lib\slf4j-log4j12-1.7.5.jar;D:\SoftWare\hbase-1.2.3\lib\snappy-java-1.0.4.1.jar;D:\SoftWare\hbase-1.2.3\lib\spymemcached-2.11.6.jar;D:\SoftWare\hbase-1.2.3\lib\xmlenc-0.52.jar;D:\SoftWare\hbase-1.2.3\lib\xz-1.0.jar;D:\SoftWare\hbase-1.2.3\lib\zookeeper-3.4.6.jar
2016-12-11 14:22:14,797 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.library.path=C:\Program Files\Java\jdk1.7.0_51\bin;C:\Windows\Sun\Java\bin;C:\Windows\system32;C:\Windows;C:\ProgramData\Oracle\Java\javapath;C:\Python27\;C:\Python27\Scripts;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;D:\SoftWare\MATLAB R2013a\runtime\win64;D:\SoftWare\MATLAB R2013a\bin;C:\Program Files (x86)\IDM Computer Solutions\UltraCompare;C:\Program Files\Java\jdk1.7.0_51\bin;C:\Program Files\Java\jdk1.7.0_51\jre\bin;D:\SoftWare\apache-ant-1.9.0\bin;HADOOP_HOME\bin;D:\SoftWare\apache-maven-3.3.9\bin;D:\SoftWare\Scala\bin;D:\SoftWare\Scala\jre\bin;%MYSQL_HOME\bin;D:\SoftWare\MySQL Server\MySQL Server 5.0\bin;D:\SoftWare\apache-tomcat-7.0.69\bin;%C:\Windows\System32;%C:\Windows\SysWOW64;D:\SoftWare\SSH Secure Shell;.
2016-12-11 14:22:14,798 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.io.tmpdir=C:\Users\ADMINI~1\AppData\Local\Temp\
2016-12-11 14:22:14,798 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.compiler=<NA>
2016-12-11 14:22:14,798 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:os.name=Windows 7
2016-12-11 14:22:14,798 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:os.arch=amd64
2016-12-11 14:22:14,798 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:os.version=6.1
2016-12-11 14:22:14,799 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:user.name=Administrator
2016-12-11 14:22:14,799 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:user.home=C:\Users\Administrator
2016-12-11 14:22:14,799 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:user.dir=D:\Code\MyEclipseJavaCode\HbaseProject
2016-12-11 14:22:14,801 INFO [org.apache.zookeeper.ZooKeeper] - Initiating client connection, connectString=HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181 sessionTimeout=90000 watcher=hconnection-0x19d12e870x0, quorum=HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181, baseZNode=/hbase
2016-12-11 14:22:14,853 INFO [org.apache.zookeeper.ClientCnxn] - Opening socket connection to server HadoopMaster/192.168.80.10:2181. Will not attempt to authenticate using SASL (unknown error)
2016-12-11 14:22:14,855 INFO [org.apache.zookeeper.ClientCnxn] - Socket connection established to HadoopMaster/192.168.80.10:2181, initiating session
2016-12-11 14:22:14,960 INFO [org.apache.zookeeper.ClientCnxn] - Session establishment complete on server HadoopMaster/192.168.80.10:2181, sessionid = 0x1582556e7c5001c, negotiated timeout = 40000


hbase(main):035:0> scan 'test_table'
ROW COLUMN+CELL 
row_01 column=f:col, timestamp=1478095650110, value=maizi 
row_01 column=f:name, timestamp=1478095741767, value=Andy2 
row_02 column=f:name, timestamp=1478095849538, value=Andy2 
row_03 column=f:name, timestamp=1478095893278, value=Andy3 
row_04 column=f:name, timestamp=1478096702098, value=Andy1 
4 row(s) in 0.1190 seconds

hbase(main):036:0> scan 'test_table'
ROW COLUMN+CELL 
row_01 column=f:col, timestamp=1478095650110, value=maizi 
row_01 column=f:name, timestamp=1478095741767, value=Andy2 
row_02 column=f:name, timestamp=1478095849538, value=Andy2 
row_03 column=f:name, timestamp=1478095893278, value=Andy3 
row_04 column=f:name, timestamp=1478097220790, value=\xE5\x8C\x97\xE4\xBA\xAC 
4 row(s) in 0.5970 seconds

hbase(main):037:0> scan 'test_table'
ROW COLUMN+CELL 
row_01 column=f:col, timestamp=1478095650110, value=maizi 
row_01 column=f:name, timestamp=1478095741767, value=Andy2 
row_02 column=f:name, timestamp=1478095849538, value=Andy2 
row_03 column=f:name, timestamp=1478095893278, value=Andy3 
row_04 column=f:name, timestamp=1478097227253, value=\xE5\x8C\x97\xE4\xBA\xAC 
row_05 column=f:address, timestamp=1478097364649, value=beijng 
5 row(s) in 0.2630 seconds

hbase(main):038:0>

This is exactly the "pool" concept: the connection is created once and then kept alive and reused.
Detailed analysis

Here I configured the pool with 10 threads.

It is really quite simple: one caller takes a connection to use, another takes one too, and when you are done you return it, much like borrowing books from a library.

You might ask: if the pool is fixed at 10 threads and they are all taken, what happens when an 11th request arrives? Is there nothing left to take?

Answer: it simply waits until someone returns one, on the same principle as a queue.
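
The waiting behaviour is easy to see with a toy example, not from the original post and with no HBase involved: submit 11 tasks to a fixed pool of 10 threads, and the 11th task only starts once one of the first 10 finishes (written Java 7 style, matching the JDK shown in the logs below):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PoolQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(10);// 10 workers, like the pool in TableConnection
        for (int i = 1; i <= 11; i++) {
            final int taskId = i;
            pool.submit(new Runnable() {
                public void run() {
                    System.out.println("task " + taskId + " running on " + Thread.currentThread().getName());
                    try {
                        TimeUnit.SECONDS.sleep(2);// hold this worker for a while
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
        }
        // Tasks 1-10 start at once; task 11 sits in the queue until a worker is returned.
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}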

  

 

 

 

The reason for doing it this way is simple: with the pool in place, we no longer have to load the configuration and connect to ZooKeeper by hand every time, because all of that is written once in TableConnection.java.
2. Using the "pool" in place of the approach from the earlier post:

HBase Programming API Primer Series: get (client side) (2)

To go a step further and give readers a deeper feel for what the "pool" buys you: this is also the first-choice, strongly recommended way to do it in real-world company development.

 

hbase(main):038:0> scan 'test_table'
ROW COLUMN+CELL 
row_01 column=f:col, timestamp=1478095650110, value=maizi 
row_01 column=f:name, timestamp=1478095741767, value=Andy2 
row_02 column=f:name, timestamp=1478095849538, value=Andy2 
row_03 column=f:name, timestamp=1478095893278, value=Andy3 
row_04 column=f:name, timestamp=1478097227253, value=\xE5\x8C\x97\xE4\xBA\xAC 
row_05 column=f:address, timestamp=1478097364649, value=beijng 
5 row(s) in 0.2280 seconds

hbase(main):039:0>

package zhouls.bigdata.HbaseProject.Pool;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseTest {

    public static void main(String[] args) throws Exception {
//        HTable table = new HTable(getConfig(), TableName.valueOf("test_table"));// table name is test_table
//        Put put = new Put(Bytes.toBytes("row_04"));// row key is row_04
//        put.add(Bytes.toBytes("f"), Bytes.toBytes("name"), Bytes.toBytes("Andy1"));// family f, qualifier name, value Andy1
//        put.add(Bytes.toBytes("f2"), Bytes.toBytes("name"), Bytes.toBytes("Andy3"));// family f2, qualifier name, value Andy3
//        table.put(put);
//        table.close();

//        Get get = new Get(Bytes.toBytes("row_04"));
//        get.addColumn(Bytes.toBytes("f1"), Bytes.toBytes("age"));// left unspecified, as here, every column is returned by default
//        Result rest = table.get(get);
//        System.out.println(rest.toString());
//        table.close();

//        Delete delete = new Delete(Bytes.toBytes("row_2"));
//        delete.deleteColumn(Bytes.toBytes("f1"), Bytes.toBytes("email"));
//        delete.deleteColumn(Bytes.toBytes("f1"), Bytes.toBytes("name"));
//        table.delete(delete);
//        table.close();

//        Delete delete = new Delete(Bytes.toBytes("row_04"));
////        delete.deleteColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));// deleteColumn deletes only the newest timestamp version of a column
//        delete.deleteColumns(Bytes.toBytes("f"), Bytes.toBytes("name"));// deleteColumns deletes all timestamp versions of a column
//        table.delete(delete);
//        table.close();

//        Scan scan = new Scan();
//        scan.setStartRow(Bytes.toBytes("row_01"));// start row key, inclusive
//        scan.setStopRow(Bytes.toBytes("row_03"));// stop row key, exclusive
//        scan.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
//        ResultScanner rst = table.getScanner(scan);// loop over the whole result set
//        System.out.println(rst.toString());
//        for (Result next = rst.next(); next != null; next = rst.next()) {
//            for (Cell cell : next.rawCells()) {// loop over the cells under one row key
//                System.out.println(next.toString());
//                System.out.println("family:" + Bytes.toString(CellUtil.cloneFamily(cell)));
//                System.out.println("col:" + Bytes.toString(CellUtil.cloneQualifier(cell)));
//                System.out.println("value:" + Bytes.toString(CellUtil.cloneValue(cell)));
//            }
//        }
//        table.close();

        HBaseTest hbasetest = new HBaseTest();
//        hbasetest.insertValue();
        hbasetest.getValue();
    }

//    public void insertValue() throws Exception {
//        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
//        Put put = new Put(Bytes.toBytes("row_05"));// row key is row_05
//        put.add(Bytes.toBytes("f"), Bytes.toBytes("address"), Bytes.toBytes("beijng"));
//        table.put(put);
//        table.close();
//    }

    public void getValue() throws Exception {
        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
        Get get = new Get(Bytes.toBytes("row_03"));
        get.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
        Result rest = table.get(get);
        System.out.println(rest.toString());
        table.close();
    }

    public static Configuration getConfig() {
        Configuration configuration = HBaseConfiguration.create();// load the HBase defaults, not a bare Hadoop Configuration
//        configuration.set("hbase.rootdir", "hdfs://HadoopMaster:9000/hbase");
        configuration.set("hbase.zookeeper.quorum", "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
        return configuration;
    }
}

2016-12-11 14:37:12,030 INFO [org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper] - Process identifier=hconnection-0x7660aac9 connecting to ZooKeeper ensemble=HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181
2016-12-11 14:37:12,040 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
2016-12-11 14:37:12,041 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:host.name=WIN-BQOBV63OBNM
2016-12-11 14:37:12,041 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.version=1.7.0_51
2016-12-11 14:37:12,041 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.vendor=Oracle Corporation
2016-12-11 14:37:12,041 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.home=C:\Program Files\Java\jdk1.7.0_51\jre
2016-12-11 14:37:12,041 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.class.path=D:\Code\MyEclipseJavaCode\HbaseProject\bin;D:\SoftWare\hbase-1.2.3\lib\activation-1.1.jar;D:\SoftWare\hbase-1.2.3\lib\aopalliance-1.0.jar;D:\SoftWare\hbase-1.2.3\lib\apacheds-i18n-2.0.0-M15.jar;D:\SoftWare\hbase-1.2.3\lib\apacheds-kerberos-codec-2.0.0-M15.jar;D:\SoftWare\hbase-1.2.3\lib\api-asn1-api-1.0.0-M20.jar;D:\SoftWare\hbase-1.2.3\lib\api-util-1.0.0-M20.jar;D:\SoftWare\hbase-1.2.3\lib\asm-3.1.jar;D:\SoftWare\hbase-1.2.3\lib\avro-1.7.4.jar;D:\SoftWare\hbase-1.2.3\lib\commons-beanutils-1.7.0.jar;D:\SoftWare\hbase-1.2.3\lib\commons-beanutils-core-1.8.0.jar;D:\SoftWare\hbase-1.2.3\lib\commons-cli-1.2.jar;D:\SoftWare\hbase-1.2.3\lib\commons-codec-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\commons-collections-3.2.2.jar;D:\SoftWare\hbase-1.2.3\lib\commons-compress-1.4.1.jar;D:\SoftWare\hbase-1.2.3\lib\commons-configuration-1.6.jar;D:\SoftWare\hbase-1.2.3\lib\commons-daemon-1.0.13.jar;D:\SoftWare\hbase-1.2.3\lib\commons-digester-1.8.jar;D:\SoftWare\hbase-1.2.3\lib\commons-el-1.0.jar;D:\SoftWare\hbase-1.2.3\lib\commons-httpclient-3.1.jar;D:\SoftWare\hbase-1.2.3\lib\commons-io-2.4.jar;D:\SoftWare\hbase-1.2.3\lib\commons-lang-2.6.jar;D:\SoftWare\hbase-1.2.3\lib\commons-logging-1.2.jar;D:\SoftWare\hbase-1.2.3\lib\commons-math-2.2.jar;D:\SoftWare\hbase-1.2.3\lib\commons-math3-3.1.1.jar;D:\SoftWare\hbase-1.2.3\lib\commons-net-3.1.jar;D:\SoftWare\hbase-1.2.3\lib\disruptor-3.3.0.jar;D:\SoftWare\hbase-1.2.3\lib\findbugs-annotations-1.3.9-1.jar;D:\SoftWare\hbase-1.2.3\lib\guava-12.0.1.jar;D:\SoftWare\hbase-1.2.3\lib\guice-3.0.jar;D:\SoftWare\hbase-1.2.3\lib\guice-servlet-3.0.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-annotations-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-auth-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-client-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-common-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-hdfs-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-mapreduce-client-app-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-mapreduce-client-common-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-mapreduce-client-core-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-mapreduce-client-jobclient-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-mapreduce-client-shuffle-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-yarn-api-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-yarn-client-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-yarn-common-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-yarn-server-common-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-annotations-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-annotations-1.2.3-tests.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-client-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-common-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-common-1.2.3-tests.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-examples-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-external-blockcache-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-hadoop2-compat-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-hadoop-compat-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-it-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-it-1.2.3-tests.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-prefix-tree-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-procedure-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-protocol-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-resource-bundle-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-rest-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-server-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-server-1.2.3-tests.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-shell-1
.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-thrift-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\htrace-core-3.1.0-incubating.jar;D:\SoftWare\hbase-1.2.3\lib\httpclient-4.2.5.jar;D:\SoftWare\hbase-1.2.3\lib\httpcore-4.4.1.jar;D:\SoftWare\hbase-1.2.3\lib\jackson-core-asl-1.9.13.jar;D:\SoftWare\hbase-1.2.3\lib\jackson-jaxrs-1.9.13.jar;D:\SoftWare\hbase-1.2.3\lib\jackson-mapper-asl-1.9.13.jar;D:\SoftWare\hbase-1.2.3\lib\jackson-xc-1.9.13.jar;D:\SoftWare\hbase-1.2.3\lib\jamon-runtime-2.4.1.jar;D:\SoftWare\hbase-1.2.3\lib\jasper-compiler-5.5.23.jar;D:\SoftWare\hbase-1.2.3\lib\jasper-runtime-5.5.23.jar;D:\SoftWare\hbase-1.2.3\lib\javax.inject-1.jar;D:\SoftWare\hbase-1.2.3\lib\java-xmlbuilder-0.4.jar;D:\SoftWare\hbase-1.2.3\lib\jaxb-api-2.2.2.jar;D:\SoftWare\hbase-1.2.3\lib\jaxb-impl-2.2.3-1.jar;D:\SoftWare\hbase-1.2.3\lib\jcodings-1.0.8.jar;D:\SoftWare\hbase-1.2.3\lib\jersey-client-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\jersey-core-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\jersey-guice-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\jersey-json-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\jersey-server-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\jets3t-0.9.0.jar;D:\SoftWare\hbase-1.2.3\lib\jettison-1.3.3.jar;D:\SoftWare\hbase-1.2.3\lib\jetty-6.1.26.jar;D:\SoftWare\hbase-1.2.3\lib\jetty-sslengine-6.1.26.jar;D:\SoftWare\hbase-1.2.3\lib\jetty-util-6.1.26.jar;D:\SoftWare\hbase-1.2.3\lib\joni-2.1.2.jar;D:\SoftWare\hbase-1.2.3\lib\jruby-complete-1.6.8.jar;D:\SoftWare\hbase-1.2.3\lib\jsch-0.1.42.jar;D:\SoftWare\hbase-1.2.3\lib\jsp-2.1-6.1.14.jar;D:\SoftWare\hbase-1.2.3\lib\jsp-api-2.1-6.1.14.jar;D:\SoftWare\hbase-1.2.3\lib\junit-4.12.jar;D:\SoftWare\hbase-1.2.3\lib\leveldbjni-all-1.8.jar;D:\SoftWare\hbase-1.2.3\lib\libthrift-0.9.3.jar;D:\SoftWare\hbase-1.2.3\lib\log4j-1.2.17.jar;D:\SoftWare\hbase-1.2.3\lib\metrics-core-2.2.0.jar;D:\SoftWare\hbase-1.2.3\lib\netty-all-4.0.23.Final.jar;D:\SoftWare\hbase-1.2.3\lib\paranamer-2.3.jar;D:\SoftWare\hbase-1.2.3\lib\protobuf-java-2.5.0.jar;D:\SoftWare\hbase-1.2.3\lib\servlet-api-2.5.jar;D:\SoftWare\hbase-1.2.3\lib\servlet-api-2.5-6.1.14.jar;D:\SoftWare\hbase-1.2.3\lib\slf4j-api-1.7.7.jar;D:\SoftWare\hbase-1.2.3\lib\slf4j-log4j12-1.7.5.jar;D:\SoftWare\hbase-1.2.3\lib\snappy-java-1.0.4.1.jar;D:\SoftWare\hbase-1.2.3\lib\spymemcached-2.11.6.jar;D:\SoftWare\hbase-1.2.3\lib\xmlenc-0.52.jar;D:\SoftWare\hbase-1.2.3\lib\xz-1.0.jar;D:\SoftWare\hbase-1.2.3\lib\zookeeper-3.4.6.jar
2016-12-11 14:37:12,042 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.library.path=C:\Program Files\Java\jdk1.7.0_51\bin;C:\Windows\Sun\Java\bin;C:\Windows\system32;C:\Windows;C:\ProgramData\Oracle\Java\javapath;C:\Python27\;C:\Python27\Scripts;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;D:\SoftWare\MATLAB R2013a\runtime\win64;D:\SoftWare\MATLAB R2013a\bin;C:\Program Files (x86)\IDM Computer Solutions\UltraCompare;C:\Program Files\Java\jdk1.7.0_51\bin;C:\Program Files\Java\jdk1.7.0_51\jre\bin;D:\SoftWare\apache-ant-1.9.0\bin;HADOOP_HOME\bin;D:\SoftWare\apache-maven-3.3.9\bin;D:\SoftWare\Scala\bin;D:\SoftWare\Scala\jre\bin;%MYSQL_HOME\bin;D:\SoftWare\MySQL Server\MySQL Server 5.0\bin;D:\SoftWare\apache-tomcat-7.0.69\bin;%C:\Windows\System32;%C:\Windows\SysWOW64;D:\SoftWare\SSH Secure Shell;.
2016-12-11 14:37:12,042 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.io.tmpdir=C:\Users\ADMINI~1\AppData\Local\Temp\
2016-12-11 14:37:12,042 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.compiler=<NA>
2016-12-11 14:37:12,042 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:os.name=Windows 7
2016-12-11 14:37:12,042 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:os.arch=amd64
2016-12-11 14:37:12,042 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:os.version=6.1
2016-12-11 14:37:12,042 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:user.name=Administrator
2016-12-11 14:37:12,042 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:user.home=C:\Users\Administrator
2016-12-11 14:37:12,042 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:user.dir=D:\Code\MyEclipseJavaCode\HbaseProject
2016-12-11 14:37:12,044 INFO [org.apache.zookeeper.ZooKeeper] - Initiating client connection, connectString=HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181 sessionTimeout=90000 watcher=hconnection-0x7660aac90x0, quorum=HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181, baseZNode=/hbase
2016-12-11 14:37:12,091 INFO [org.apache.zookeeper.ClientCnxn] - Opening socket connection to server HadoopMaster/192.168.80.10:2181. Will not attempt to authenticate using SASL (unknown error)
2016-12-11 14:37:12,094 INFO [org.apache.zookeeper.ClientCnxn] - Socket connection established to HadoopMaster/192.168.80.10:2181, initiating session
2016-12-11 14:37:12,162 INFO [org.apache.zookeeper.ClientCnxn] - Session establishment complete on server HadoopMaster/192.168.80.10:2181, sessionid = 0x1582556e7c5001d, negotiated timeout = 40000
keyvalues={row_03/f:name/1478095893278/Put/vlen=5/seqid=0}

3.1. Using the "pool" in place of the approach from the earlier posts:

HBase Programming API Primer Series: delete (client side) (3)

HBase Programming API Primer: the difference between delete.deleteColumn and delete.deleteColumns (client side) (4)

From the oldest timestamp version to the newest, the values are Andy2 -> Andy1 -> Andy0 (Andy2 was written first, Andy0 last).
package zhouls.bigdata.HbaseProject.Pool;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseTest {

    public static void main(String[] args) throws Exception {
//        HTable table = new HTable(getConfig(), TableName.valueOf("test_table"));// table name is test_table
//        Put put = new Put(Bytes.toBytes("row_04"));// row key is row_04
//        put.add(Bytes.toBytes("f"), Bytes.toBytes("name"), Bytes.toBytes("Andy1"));// family f, qualifier name, value Andy1
//        put.add(Bytes.toBytes("f2"), Bytes.toBytes("name"), Bytes.toBytes("Andy3"));// family f2, qualifier name, value Andy3
//        table.put(put);
//        table.close();

//        Get get = new Get(Bytes.toBytes("row_04"));
//        get.addColumn(Bytes.toBytes("f1"), Bytes.toBytes("age"));// left unspecified, as here, every column is returned by default
//        Result rest = table.get(get);
//        System.out.println(rest.toString());
//        table.close();

//        Delete delete = new Delete(Bytes.toBytes("row_2"));
//        delete.deleteColumn(Bytes.toBytes("f1"), Bytes.toBytes("email"));
//        delete.deleteColumn(Bytes.toBytes("f1"), Bytes.toBytes("name"));
//        table.delete(delete);
//        table.close();

//        Delete delete = new Delete(Bytes.toBytes("row_04"));
////        delete.deleteColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));// deleteColumn deletes only the newest timestamp version of a column
//        delete.deleteColumns(Bytes.toBytes("f"), Bytes.toBytes("name"));// deleteColumns deletes all timestamp versions of a column
//        table.delete(delete);
//        table.close();

//        Scan scan = new Scan();
//        scan.setStartRow(Bytes.toBytes("row_01"));// start row key, inclusive
//        scan.setStopRow(Bytes.toBytes("row_03"));// stop row key, exclusive
//        scan.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
//        ResultScanner rst = table.getScanner(scan);// loop over the whole result set
//        System.out.println(rst.toString());
//        for (Result next = rst.next(); next != null; next = rst.next()) {
//            for (Cell cell : next.rawCells()) {// loop over the cells under one row key
//                System.out.println(next.toString());
//                System.out.println("family:" + Bytes.toString(CellUtil.cloneFamily(cell)));
//                System.out.println("col:" + Bytes.toString(CellUtil.cloneQualifier(cell)));
//                System.out.println("value:" + Bytes.toString(CellUtil.cloneValue(cell)));
//            }
//        }
//        table.close();

        HBaseTest hbasetest = new HBaseTest();
//        hbasetest.insertValue();
//        hbasetest.getValue();
        hbasetest.delete();
    }

//    public void insertValue() throws Exception {
//        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
//        Put put = new Put(Bytes.toBytes("row_01"));// row key is row_01
//        put.add(Bytes.toBytes("f"), Bytes.toBytes("name"), Bytes.toBytes("Andy0"));
//        table.put(put);
//        table.close();
//    }

//    public void getValue() throws Exception {
//        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
//        Get get = new Get(Bytes.toBytes("row_03"));
//        get.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
//        Result rest = table.get(get);
//        System.out.println(rest.toString());
//        table.close();
//    }

    public void delete() throws Exception {
        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
        Delete delete = new Delete(Bytes.toBytes("row_01"));
//        delete.deleteColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));// deleteColumn deletes only the newest timestamp version of a column
        delete.deleteColumns(Bytes.toBytes("f"), Bytes.toBytes("name"));// deleteColumns deletes all timestamp versions of a column
        table.delete(delete);
        table.close();
    }

    public static Configuration getConfig() {
        Configuration configuration = HBaseConfiguration.create();// load the HBase defaults, not a bare Hadoop Configuration
//        configuration.set("hbase.rootdir", "hdfs://HadoopMaster:9000/hbase");
        configuration.set("hbase.zookeeper.quorum", "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
        return configuration;
    }
}

The difference between delete.deleteColumn and delete.deleteColumns:

    deleteColumn deletes only the newest timestamp version of a column.

    deleteColumns deletes all timestamp versions of that column.
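
To see the difference concretely, here is a minimal sketch: a hypothetical deleteVersionDemo method you could drop into the HBaseTest class above, reusing its imports and the shared pool. It assumes the older versions of f:name are still around, which holds until a flush or major compaction, or if the family is configured to keep multiple versions; the explicit timestamps make the three writes distinct versions.

    public void deleteVersionDemo() throws Exception {
        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
        // Write three successive versions of f:name on the same row key.
        long ts = 1;
        for (String value : new String[]{"Andy2", "Andy1", "Andy0"}) {
            Put put = new Put(Bytes.toBytes("row_01"));
            put.add(Bytes.toBytes("f"), Bytes.toBytes("name"), ts++, Bytes.toBytes(value));
            table.put(put);
        }
        Delete delete = new Delete(Bytes.toBytes("row_01"));
        delete.deleteColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));// removes only the newest version (Andy0)
        table.delete(delete);
        Result rest = table.get(new Get(Bytes.toBytes("row_01")));// now yields Andy1, the next-newest version
        System.out.println(rest.toString());
        // Using delete.deleteColumns(...) instead would remove Andy2, Andy1 and Andy0 alike,
        // and the same get would come back empty.
        table.close();
    }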

3.2. Using the "pool" in place of the approach from the earlier posts:

HBase Programming API Primer: delete (client side)

HBase Programming API Primer: the difference between delete.deleteColumn and delete.deleteColumns (client side)

From the oldest timestamp version to the newest, the values are Andy2 -> Andy1 -> Andy0 (Andy2 was written first, Andy0 last).

package zhouls.bigdata.HbaseProject.Pool;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseTest {

    public static void main(String[] args) throws Exception {
//        HTable table = new HTable(getConfig(), TableName.valueOf("test_table"));// table name is test_table
//        Put put = new Put(Bytes.toBytes("row_04"));// row key is row_04
//        put.add(Bytes.toBytes("f"), Bytes.toBytes("name"), Bytes.toBytes("Andy1"));// family f, qualifier name, value Andy1
//        put.add(Bytes.toBytes("f2"), Bytes.toBytes("name"), Bytes.toBytes("Andy3"));// family f2, qualifier name, value Andy3
//        table.put(put);
//        table.close();

//        Get get = new Get(Bytes.toBytes("row_04"));
//        get.addColumn(Bytes.toBytes("f1"), Bytes.toBytes("age"));// left unspecified, as here, every column is returned by default
//        Result rest = table.get(get);
//        System.out.println(rest.toString());
//        table.close();

//        Delete delete = new Delete(Bytes.toBytes("row_2"));
//        delete.deleteColumn(Bytes.toBytes("f1"), Bytes.toBytes("email"));
//        delete.deleteColumn(Bytes.toBytes("f1"), Bytes.toBytes("name"));
//        table.delete(delete);
//        table.close();

//        Delete delete = new Delete(Bytes.toBytes("row_04"));
////        delete.deleteColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));// deleteColumn deletes only the newest timestamp version of a column
//        delete.deleteColumns(Bytes.toBytes("f"), Bytes.toBytes("name"));// deleteColumns deletes all timestamp versions of a column
//        table.delete(delete);
//        table.close();

//        Scan scan = new Scan();
//        scan.setStartRow(Bytes.toBytes("row_01"));// start row key, inclusive
//        scan.setStopRow(Bytes.toBytes("row_03"));// stop row key, exclusive
//        scan.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
//        ResultScanner rst = table.getScanner(scan);// loop over the whole result set
//        System.out.println(rst.toString());
//        for (Result next = rst.next(); next != null; next = rst.next()) {
//            for (Cell cell : next.rawCells()) {// loop over the cells under one row key
//                System.out.println(next.toString());
//                System.out.println("family:" + Bytes.toString(CellUtil.cloneFamily(cell)));
//                System.out.println("col:" + Bytes.toString(CellUtil.cloneQualifier(cell)));
//                System.out.println("value:" + Bytes.toString(CellUtil.cloneValue(cell)));
//            }
//        }
//        table.close();

        HBaseTest hbasetest = new HBaseTest();
//        hbasetest.insertValue();
//        hbasetest.getValue();
        hbasetest.delete();
    }

//    public void insertValue() throws Exception {
//        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
//        Put put = new Put(Bytes.toBytes("row_01"));// row key is row_01
//        put.add(Bytes.toBytes("f"), Bytes.toBytes("name"), Bytes.toBytes("Andy0"));
//        table.put(put);
//        table.close();
//    }

//    public void getValue() throws Exception {
//        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
//        Get get = new Get(Bytes.toBytes("row_03"));
//        get.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
//        Result rest = table.get(get);
//        System.out.println(rest.toString());
//        table.close();
//    }

    public void delete() throws Exception {
        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
        Delete delete = new Delete(Bytes.toBytes("row_01"));
        delete.deleteColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));// deleteColumn deletes only the newest timestamp version of a column
//        delete.deleteColumns(Bytes.toBytes("f"), Bytes.toBytes("name"));// deleteColumns deletes all timestamp versions of a column
        table.delete(delete);
        table.close();
    }

    public static Configuration getConfig() {
        Configuration configuration = HBaseConfiguration.create();// load the HBase defaults, not a bare Hadoop Configuration
//        configuration.set("hbase.rootdir", "hdfs://HadoopMaster:9000/hbase");
        configuration.set("hbase.zookeeper.quorum", "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
        return configuration;
    }
}

From the oldest timestamp version to the newest, the values are Andy2 -> Andy1 -> Andy0 (Andy2 was written first, Andy0 last).

The difference between delete.deleteColumn and delete.deleteColumns:

    deleteColumn deletes only the newest timestamp version of a column.

    deleteColumns deletes all timestamp versions of that column.
4. Using the "pool" in place of the approach from the earlier post:

HBase Programming API Primer: scan (client side)

 

package zhouls.bigdata.HbaseProject.Pool;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseTest {

    public static void main(String[] args) throws Exception {
//        HTable table = new HTable(getConfig(), TableName.valueOf("test_table"));// table name is test_table
//        Put put = new Put(Bytes.toBytes("row_04"));// row key is row_04
//        put.add(Bytes.toBytes("f"), Bytes.toBytes("name"), Bytes.toBytes("Andy1"));// family f, qualifier name, value Andy1
//        put.add(Bytes.toBytes("f2"), Bytes.toBytes("name"), Bytes.toBytes("Andy3"));// family f2, qualifier name, value Andy3
//        table.put(put);
//        table.close();

//        Get get = new Get(Bytes.toBytes("row_04"));
//        get.addColumn(Bytes.toBytes("f1"), Bytes.toBytes("age"));// left unspecified, as here, every column is returned by default
//        Result rest = table.get(get);
//        System.out.println(rest.toString());
//        table.close();

//        Delete delete = new Delete(Bytes.toBytes("row_2"));
//        delete.deleteColumn(Bytes.toBytes("f1"), Bytes.toBytes("email"));
//        delete.deleteColumn(Bytes.toBytes("f1"), Bytes.toBytes("name"));
//        table.delete(delete);
//        table.close();

//        Delete delete = new Delete(Bytes.toBytes("row_04"));
////        delete.deleteColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));// deleteColumn deletes only the newest timestamp version of a column
//        delete.deleteColumns(Bytes.toBytes("f"), Bytes.toBytes("name"));// deleteColumns deletes all timestamp versions of a column
//        table.delete(delete);
//        table.close();

//        Scan scan = new Scan();
//        scan.setStartRow(Bytes.toBytes("row_01"));// start row key, inclusive
//        scan.setStopRow(Bytes.toBytes("row_03"));// stop row key, exclusive
//        scan.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
//        ResultScanner rst = table.getScanner(scan);// loop over the whole result set
//        System.out.println(rst.toString());
//        for (Result next = rst.next(); next != null; next = rst.next()) {
//            for (Cell cell : next.rawCells()) {// loop over the cells under one row key
//                System.out.println(next.toString());
//                System.out.println("family:" + Bytes.toString(CellUtil.cloneFamily(cell)));
//                System.out.println("col:" + Bytes.toString(CellUtil.cloneQualifier(cell)));
//                System.out.println("value:" + Bytes.toString(CellUtil.cloneValue(cell)));
//            }
//        }
//        table.close();

        HBaseTest hbasetest = new HBaseTest();
//        hbasetest.insertValue();
//        hbasetest.getValue();
//        hbasetest.delete();
        hbasetest.scanValue();
    }

//    public void insertValue() throws Exception {
//        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
//        Put put = new Put(Bytes.toBytes("row_01"));// row key is row_01
//        put.add(Bytes.toBytes("f"), Bytes.toBytes("name"), Bytes.toBytes("Andy0"));
//        table.put(put);
//        table.close();
//    }

//    public void getValue() throws Exception {
//        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
//        Get get = new Get(Bytes.toBytes("row_03"));
//        get.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
//        Result rest = table.get(get);
//        System.out.println(rest.toString());
//        table.close();
//    }

//    public void delete() throws Exception {
//        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
//        Delete delete = new Delete(Bytes.toBytes("row_01"));
//        delete.deleteColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));// deleteColumn deletes only the newest timestamp version of a column
////        delete.deleteColumns(Bytes.toBytes("f"), Bytes.toBytes("name"));// deleteColumns deletes all timestamp versions of a column
//        table.delete(delete);
//        table.close();
//    }

    public void scanValue() throws Exception {
        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
        Scan scan = new Scan();
        scan.setStartRow(Bytes.toBytes("row_02"));// start row key, inclusive
        scan.setStopRow(Bytes.toBytes("row_04"));// stop row key, exclusive
        scan.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
        ResultScanner rst = table.getScanner(scan);// loop over the whole result set
        System.out.println(rst.toString());
        for (Result next = rst.next(); next != null; next = rst.next()) {
            for (Cell cell : next.rawCells()) {// loop over the cells under one row key
                System.out.println(next.toString());
                System.out.println("family:" + Bytes.toString(CellUtil.cloneFamily(cell)));
                System.out.println("col:" + Bytes.toString(CellUtil.cloneQualifier(cell)));
                System.out.println("value:" + Bytes.toString(CellUtil.cloneValue(cell)));
            }
        }
        table.close();
    }

    public static Configuration getConfig() {
        Configuration configuration = HBaseConfiguration.create();// load the HBase defaults, not a bare Hadoop Configuration
//        configuration.set("hbase.rootdir", "hdfs://HadoopMaster:9000/hbase");
        configuration.set("hbase.zookeeper.quorum", "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
        return configuration;
    }
}

2016-12-11 15:14:56,940 INFO [org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper] - Process identifier=hconnection-0x278a676 connecting to ZooKeeper ensemble=HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181
2016-12-11 15:14:56,954 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
2016-12-11 15:14:56,954 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:host.name=WIN-BQOBV63OBNM
2016-12-11 15:14:56,954 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.version=1.7.0_51
2016-12-11 15:14:56,954 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.vendor=Oracle Corporation
2016-12-11 15:14:56,954 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.home=C:\Program Files\Java\jdk1.7.0_51\jre
2016-12-11 15:14:56,954 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.class.path=D:\Code\MyEclipseJavaCode\HbaseProject\bin;D:\SoftWare\hbase-1.2.3\lib\activation-1.1.jar;D:\SoftWare\hbase-1.2.3\lib\aopalliance-1.0.jar;D:\SoftWare\hbase-1.2.3\lib\apacheds-i18n-2.0.0-M15.jar;D:\SoftWare\hbase-1.2.3\lib\apacheds-kerberos-codec-2.0.0-M15.jar;D:\SoftWare\hbase-1.2.3\lib\api-asn1-api-1.0.0-M20.jar;D:\SoftWare\hbase-1.2.3\lib\api-util-1.0.0-M20.jar;D:\SoftWare\hbase-1.2.3\lib\asm-3.1.jar;D:\SoftWare\hbase-1.2.3\lib\avro-1.7.4.jar;D:\SoftWare\hbase-1.2.3\lib\commons-beanutils-1.7.0.jar;D:\SoftWare\hbase-1.2.3\lib\commons-beanutils-core-1.8.0.jar;D:\SoftWare\hbase-1.2.3\lib\commons-cli-1.2.jar;D:\SoftWare\hbase-1.2.3\lib\commons-codec-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\commons-collections-3.2.2.jar;D:\SoftWare\hbase-1.2.3\lib\commons-compress-1.4.1.jar;D:\SoftWare\hbase-1.2.3\lib\commons-configuration-1.6.jar;D:\SoftWare\hbase-1.2.3\lib\commons-daemon-1.0.13.jar;D:\SoftWare\hbase-1.2.3\lib\commons-digester-1.8.jar;D:\SoftWare\hbase-1.2.3\lib\commons-el-1.0.jar;D:\SoftWare\hbase-1.2.3\lib\commons-httpclient-3.1.jar;D:\SoftWare\hbase-1.2.3\lib\commons-io-2.4.jar;D:\SoftWare\hbase-1.2.3\lib\commons-lang-2.6.jar;D:\SoftWare\hbase-1.2.3\lib\commons-logging-1.2.jar;D:\SoftWare\hbase-1.2.3\lib\commons-math-2.2.jar;D:\SoftWare\hbase-1.2.3\lib\commons-math3-3.1.1.jar;D:\SoftWare\hbase-1.2.3\lib\commons-net-3.1.jar;D:\SoftWare\hbase-1.2.3\lib\disruptor-3.3.0.jar;D:\SoftWare\hbase-1.2.3\lib\findbugs-annotations-1.3.9-1.jar;D:\SoftWare\hbase-1.2.3\lib\guava-12.0.1.jar;D:\SoftWare\hbase-1.2.3\lib\guice-3.0.jar;D:\SoftWare\hbase-1.2.3\lib\guice-servlet-3.0.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-annotations-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-auth-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-client-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-common-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-hdfs-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-mapreduce-client-app-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-mapreduce-client-common-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-mapreduce-client-core-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-mapreduce-client-jobclient-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-mapreduce-client-shuffle-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-yarn-api-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-yarn-client-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-yarn-common-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-yarn-server-common-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-annotations-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-annotations-1.2.3-tests.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-client-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-common-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-common-1.2.3-tests.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-examples-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-external-blockcache-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-hadoop2-compat-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-hadoop-compat-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-it-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-it-1.2.3-tests.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-prefix-tree-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-procedure-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-protocol-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-resource-bundle-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-rest-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-server-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-server-1.2.3-tests.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-shell-1
.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-thrift-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\htrace-core-3.1.0-incubating.jar;D:\SoftWare\hbase-1.2.3\lib\httpclient-4.2.5.jar;D:\SoftWare\hbase-1.2.3\lib\httpcore-4.4.1.jar;D:\SoftWare\hbase-1.2.3\lib\jackson-core-asl-1.9.13.jar;D:\SoftWare\hbase-1.2.3\lib\jackson-jaxrs-1.9.13.jar;D:\SoftWare\hbase-1.2.3\lib\jackson-mapper-asl-1.9.13.jar;D:\SoftWare\hbase-1.2.3\lib\jackson-xc-1.9.13.jar;D:\SoftWare\hbase-1.2.3\lib\jamon-runtime-2.4.1.jar;D:\SoftWare\hbase-1.2.3\lib\jasper-compiler-5.5.23.jar;D:\SoftWare\hbase-1.2.3\lib\jasper-runtime-5.5.23.jar;D:\SoftWare\hbase-1.2.3\lib\javax.inject-1.jar;D:\SoftWare\hbase-1.2.3\lib\java-xmlbuilder-0.4.jar;D:\SoftWare\hbase-1.2.3\lib\jaxb-api-2.2.2.jar;D:\SoftWare\hbase-1.2.3\lib\jaxb-impl-2.2.3-1.jar;D:\SoftWare\hbase-1.2.3\lib\jcodings-1.0.8.jar;D:\SoftWare\hbase-1.2.3\lib\jersey-client-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\jersey-core-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\jersey-guice-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\jersey-json-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\jersey-server-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\jets3t-0.9.0.jar;D:\SoftWare\hbase-1.2.3\lib\jettison-1.3.3.jar;D:\SoftWare\hbase-1.2.3\lib\jetty-6.1.26.jar;D:\SoftWare\hbase-1.2.3\lib\jetty-sslengine-6.1.26.jar;D:\SoftWare\hbase-1.2.3\lib\jetty-util-6.1.26.jar;D:\SoftWare\hbase-1.2.3\lib\joni-2.1.2.jar;D:\SoftWare\hbase-1.2.3\lib\jruby-complete-1.6.8.jar;D:\SoftWare\hbase-1.2.3\lib\jsch-0.1.42.jar;D:\SoftWare\hbase-1.2.3\lib\jsp-2.1-6.1.14.jar;D:\SoftWare\hbase-1.2.3\lib\jsp-api-2.1-6.1.14.jar;D:\SoftWare\hbase-1.2.3\lib\junit-4.12.jar;D:\SoftWare\hbase-1.2.3\lib\leveldbjni-all-1.8.jar;D:\SoftWare\hbase-1.2.3\lib\libthrift-0.9.3.jar;D:\SoftWare\hbase-1.2.3\lib\log4j-1.2.17.jar;D:\SoftWare\hbase-1.2.3\lib\metrics-core-2.2.0.jar;D:\SoftWare\hbase-1.2.3\lib\netty-all-4.0.23.Final.jar;D:\SoftWare\hbase-1.2.3\lib\paranamer-2.3.jar;D:\SoftWare\hbase-1.2.3\lib\protobuf-java-2.5.0.jar;D:\SoftWare\hbase-1.2.3\lib\servlet-api-2.5.jar;D:\SoftWare\hbase-1.2.3\lib\servlet-api-2.5-6.1.14.jar;D:\SoftWare\hbase-1.2.3\lib\slf4j-api-1.7.7.jar;D:\SoftWare\hbase-1.2.3\lib\slf4j-log4j12-1.7.5.jar;D:\SoftWare\hbase-1.2.3\lib\snappy-java-1.0.4.1.jar;D:\SoftWare\hbase-1.2.3\lib\spymemcached-2.11.6.jar;D:\SoftWare\hbase-1.2.3\lib\xmlenc-0.52.jar;D:\SoftWare\hbase-1.2.3\lib\xz-1.0.jar;D:\SoftWare\hbase-1.2.3\lib\zookeeper-3.4.6.jar
2016-12-11 15:14:56,955 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.library.path=C:\Program Files\Java\jdk1.7.0_51\bin;C:\Windows\Sun\Java\bin;C:\Windows\system32;C:\Windows;C:\ProgramData\Oracle\Java\javapath;C:\Python27\;C:\Python27\Scripts;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;D:\SoftWare\MATLAB R2013a\runtime\win64;D:\SoftWare\MATLAB R2013a\bin;C:\Program Files (x86)\IDM Computer Solutions\UltraCompare;C:\Program Files\Java\jdk1.7.0_51\bin;C:\Program Files\Java\jdk1.7.0_51\jre\bin;D:\SoftWare\apache-ant-1.9.0\bin;HADOOP_HOME\bin;D:\SoftWare\apache-maven-3.3.9\bin;D:\SoftWare\Scala\bin;D:\SoftWare\Scala\jre\bin;%MYSQL_HOME\bin;D:\SoftWare\MySQL Server\MySQL Server 5.0\bin;D:\SoftWare\apache-tomcat-7.0.69\bin;%C:\Windows\System32;%C:\Windows\SysWOW64;D:\SoftWare\SSH Secure Shell;.
2016-12-11 15:14:56,956 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.io.tmpdir=C:\Users\ADMINI~1\AppData\Local\Temp\
2016-12-11 15:14:56,956 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.compiler=<NA>
2016-12-11 15:14:56,956 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:os.name=Windows 7
2016-12-11 15:14:56,956 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:os.arch=amd64
2016-12-11 15:14:56,956 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:os.version=6.1
2016-12-11 15:14:56,957 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:user.name=Administrator
2016-12-11 15:14:56,957 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:user.home=C:\Users\Administrator
2016-12-11 15:14:56,957 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:user.dir=D:\Code\MyEclipseJavaCode\HbaseProject
2016-12-11 15:14:56,958 INFO [org.apache.zookeeper.ZooKeeper] - Initiating client connection, connectString=HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181 sessionTimeout=90000 watcher=hconnection-0x278a6760x0, quorum=HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181, baseZNode=/hbase
2016-12-11 15:14:57,015 INFO [org.apache.zookeeper.ClientCnxn] - Opening socket connection to server HadoopMaster/192.168.80.10:2181. Will not attempt to authenticate using SASL (unknown error)
2016-12-11 15:14:57,018 INFO [org.apache.zookeeper.ClientCnxn] - Socket connection established to HadoopMaster/192.168.80.10:2181, initiating session
2016-12-11 15:14:57,044 INFO [org.apache.zookeeper.ClientCnxn] - Session establishment complete on server HadoopMaster/192.168.80.10:2181, sessionid = 0x1582556e7c50024, negotiated timeout = 40000
org.apache.hadoop.hbase.client.ClientScanner@4362f2fe
keyvalues={row_02/f:name/1478095849538/Put/vlen=5/seqid=0}
family:f
col:name
valueAndy2
keyvalues={row_03/f:name/1478095893278/Put/vlen=5/seqid=0}
family:f
col:name
valueAndy3
  The scan returns exactly row_02 and row_03: setStartRow is inclusive while setStopRow is exclusive, so row_04 is left out. (The output reads "valueAndy2" rather than "value:Andy2" simply because the code prints "value" + ... without a separator.)

  OK, I won't walk through the remaining operations (put, get, delete) the same way here; study them on your own.

Finally, to sum up:

  In real-world development, you absolutely must master this thread-pool-backed connection pattern!
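  To make that concrete, here is a minimal sketch of several client threads sharing the pooled connection from TableConnection at the same time, which is exactly the high-concurrency situation the pool exists for. The demo class name, thread count, and row keys below are my own inventions for illustration; the HBase calls are the same ones used throughout this post.

package zhouls.bigdata.HbaseProject.Pool;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

// Hypothetical demo class, not part of the original code.
public class PooledPutDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService workers = Executors.newFixedThreadPool(5); // 5 client threads (arbitrary)
        for (int i = 0; i < 5; i++) {
            final int n = i;
            workers.submit(new Runnable() {
                public void run() {
                    try {
                        // All threads share the one pooled HConnection; only the
                        // lightweight HTableInterface is created per task.
                        HTableInterface table = TableConnection.getConnection()
                                .getTable(TableName.valueOf("test_table"));
                        Put put = new Put(Bytes.toBytes("row_1" + n)); // hypothetical row keys
                        put.add(Bytes.toBytes("f"), Bytes.toBytes("name"), Bytes.toBytes("Andy" + n));
                        table.put(put);
                        table.close(); // closes the table only; the shared connection stays open
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            });
        }
        workers.shutdown();
        workers.awaitTermination(1, TimeUnit.MINUTES);
    }
}

  Each worker borrows a table from the one shared connection, writes, and closes just the table; the connection and its internal thread pool stay alive for the next caller.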

The complete code is attached below. First, HBaseTest.java:

package zhouls.bigdata.HbaseProject.Pool;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result; // was javax.xml.transform.Result, which shadowed the HBase Result class
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseTest {

    public static void main(String[] args) throws Exception {
        // Earlier, non-pooled usage, kept from the previous posts in this series:
        // HTable table = new HTable(getConfig(), TableName.valueOf("test_table")); // table test_table
        // Put put = new Put(Bytes.toBytes("row_04")); // row key row_04
        // put.add(Bytes.toBytes("f"), Bytes.toBytes("name"), Bytes.toBytes("Andy1")); // family f, qualifier name, value Andy1
        // put.add(Bytes.toBytes("f2"), Bytes.toBytes("name"), Bytes.toBytes("Andy3")); // family f2, qualifier name, value Andy3
        // table.put(put);
        // table.close();

        // Get get = new Get(Bytes.toBytes("row_04"));
        // get.addColumn(Bytes.toBytes("f1"), Bytes.toBytes("age")); // without addColumn, every column is returned by default
        // Result rest = table.get(get);
        // System.out.println(rest.toString());
        // table.close();

        // Delete delete = new Delete(Bytes.toBytes("row_2"));
        // delete.deleteColumn(Bytes.toBytes("f1"), Bytes.toBytes("email"));
        // delete.deleteColumn(Bytes.toBytes("f1"), Bytes.toBytes("name"));
        // table.delete(delete);
        // table.close();

        // Delete delete = new Delete(Bytes.toBytes("row_04"));
        //// delete.deleteColumn(Bytes.toBytes("f"), Bytes.toBytes("name")); // deleteColumn removes only the newest version of the column
        // delete.deleteColumns(Bytes.toBytes("f"), Bytes.toBytes("name")); // deleteColumns removes every version of the column
        // table.delete(delete);
        // table.close();

        // Scan scan = new Scan();
        // scan.setStartRow(Bytes.toBytes("row_01")); // start row is inclusive
        // scan.setStopRow(Bytes.toBytes("row_03")); // stop row is exclusive
        // scan.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
        // ResultScanner rst = table.getScanner(scan); // iterate over all matching rows
        // System.out.println(rst.toString());
        // for (Result next = rst.next(); next != null; next = rst.next()) {
        //     for (Cell cell : next.rawCells()) { // loop over the cells of one row
        //         System.out.println(next.toString());
        //         System.out.println("family:" + Bytes.toString(CellUtil.cloneFamily(cell)));
        //         System.out.println("col:" + Bytes.toString(CellUtil.cloneQualifier(cell)));
        //         System.out.println("value" + Bytes.toString(CellUtil.cloneValue(cell)));
        //     }
        // }
        // table.close();

        HBaseTest hbasetest = new HBaseTest();
        // hbasetest.insertValue();
        // hbasetest.getValue();
        // hbasetest.delete();
        hbasetest.scanValue();
    }

    // In production code, obtain tables from the pooled connection, as below.
    public void insertValue() throws Exception {
        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
        Put put = new Put(Bytes.toBytes("row_01")); // row key row_01
        put.add(Bytes.toBytes("f"), Bytes.toBytes("name"), Bytes.toBytes("Andy0"));
        table.put(put);
        table.close();
    }

    public void getValue() throws Exception {
        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
        Get get = new Get(Bytes.toBytes("row_03"));
        get.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
        Result rest = table.get(get);
        System.out.println(rest.toString());
        table.close();
    }

    public void delete() throws Exception {
        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
        Delete delete = new Delete(Bytes.toBytes("row_01"));
        delete.deleteColumn(Bytes.toBytes("f"), Bytes.toBytes("name")); // newest version of the column only
        // delete.deleteColumns(Bytes.toBytes("f"), Bytes.toBytes("name")); // every version of the column
        table.delete(delete);
        table.close();
    }

    public void scanValue() throws Exception {
        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
        Scan scan = new Scan();
        scan.setStartRow(Bytes.toBytes("row_02")); // start row is inclusive
        scan.setStopRow(Bytes.toBytes("row_04")); // stop row is exclusive
        scan.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
        ResultScanner rst = table.getScanner(scan);
        System.out.println(rst.toString());
        for (Result next = rst.next(); next != null; next = rst.next()) {
            for (Cell cell : next.rawCells()) { // loop over the cells of one row
                System.out.println(next.toString());
                System.out.println("family:" + Bytes.toString(CellUtil.cloneFamily(cell)));
                System.out.println("col:" + Bytes.toString(CellUtil.cloneQualifier(cell)));
                System.out.println("value" + Bytes.toString(CellUtil.cloneValue(cell)));
            }
        }
        rst.close(); // the scanner must be closed as well
        table.close();
    }

    public static Configuration getConfig() {
        // Use HBaseConfiguration.create() (not new Configuration()) so the HBase defaults are loaded.
        Configuration configuration = HBaseConfiguration.create();
        // configuration.set("hbase.rootdir", "hdfs://HadoopMaster:9000/hbase");
        configuration.set("hbase.zookeeper.quorum", "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
        return configuration;
    }
}
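  One refinement worth knowing (my suggestion, not in the original code): HTableInterface and ResultScanner are both Closeable, so on Java 7+ (the logs above show jdk1.7.0_51) scanValue() can use try-with-resources, which closes the scanner and the table even if an exception is thrown mid-scan. A sketch of such a variant, meant to live in the HBaseTest class above:

// Hypothetical try-with-resources variant of scanValue().
public void scanValueTryWithResources() throws Exception {
    Scan scan = new Scan();
    scan.setStartRow(Bytes.toBytes("row_02")); // inclusive
    scan.setStopRow(Bytes.toBytes("row_04"));  // exclusive
    scan.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
    try (HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
         ResultScanner rst = table.getScanner(scan)) {
        for (Result next : rst) { // ResultScanner is Iterable<Result>
            for (Cell cell : next.rawCells()) {
                System.out.println("family:" + Bytes.toString(CellUtil.cloneFamily(cell)));
                System.out.println("col:" + Bytes.toString(CellUtil.cloneQualifier(cell)));
                System.out.println("value" + Bytes.toString(CellUtil.cloneValue(cell)));
            }
        }
    } // the table and the scanner are closed here automatically
}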
And the pool class itself, TableConnection.java:
package zhouls.bigdata.HbaseProject.Pool;

import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HConnection;
import org.apache.hadoop.hbase.client.HConnectionManager;

public class TableConnection {

    private TableConnection() {
    }

    private static HConnection connection = null;

    // Lazily create one shared connection backed by a fixed-size thread pool.
    // Declared synchronized so two threads cannot race and create two connections.
    public static synchronized HConnection getConnection() {
        if (connection == null) {
            ExecutorService pool = Executors.newFixedThreadPool(10); // fixed-size thread pool
            Configuration conf = HBaseConfiguration.create();
            conf.set("hbase.zookeeper.quorum", "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
            try {
                connection = HConnectionManager.createConnection(conf, pool); // hand the configuration and the pool to the connection
            } catch (IOException e) {
                e.printStackTrace(); // don't swallow the failure silently
            }
        }
        return connection;
    }
}
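  One caveat: in HBase 1.x, HConnection and HConnectionManager are deprecated in favor of Connection and ConnectionFactory. Since the environment here is hbase-1.2.3 (see the classpath in the logs above), the same pooling pattern can also be written with the newer API. The sketch below is a minimal equivalent; the class name is my own, and the quorum string is the one used throughout this post:

package zhouls.bigdata.HbaseProject.Pool;

import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

// Hypothetical replacement for TableConnection using the non-deprecated API.
public class ModernTableConnection {

    private ModernTableConnection() {
    }

    private static Connection connection = null;

    public static synchronized Connection getConnection() throws IOException {
        if (connection == null) {
            ExecutorService pool = Executors.newFixedThreadPool(10); // same fixed-size pool as above
            Configuration conf = HBaseConfiguration.create();
            conf.set("hbase.zookeeper.quorum", "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
            connection = ConnectionFactory.createConnection(conf, pool);
        }
        return connection;
    }
}

  With this variant, getConnection().getTable(TableName.valueOf("test_table")) returns a Table instead of an HTableInterface; the put/get/delete/scan calls themselves are unchanged.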


This article was reproduced from the cnblogs blog 大数据躺过的坑; the original is at http://www.cnblogs.com/zlslch/p/6159427.html. Please contact the original author before reprinting.
