FileStatus, BlockLocation, LocatedBlocks, and InputSplit in Hadoop



徐胖子 2016-12-12 21:41:00

1 FileStatus

1.1 Package
org.apache.hadoop.fs.FileStatus

1.2 Format

FileStatus{path=hdfs://192.X.X.X:9000/hadoop-2.7.1.tar.gz; isDirectory=false; length=210606807; replication=3; blocksize=134217728; modification_time=xxx; access_time=xxx; owner=xxx; group=supergroup; permission=rw-r--r--; isSymlink=false}
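The length and blocksize fields above are enough to determine how many blocks the file occupies. A minimal sketch in plain Java (no Hadoop dependency; the class and method names are mine, and the numbers are copied from the FileStatus output above):

```java
public class BlockCount {
    // Number of blocks a file of the given length occupies: ceiling division.
    static long blockCount(long length, long blockSize) {
        return (length + blockSize - 1) / blockSize;
    }

    public static void main(String[] args) {
        long length = 210606807L;    // length from the FileStatus above
        long blockSize = 134217728L; // 128 MB blocksize from the FileStatus above
        System.out.println(blockCount(length, blockSize)); // 2
    }
}
```

Two blocks, which matches the BlockLocation output in the next section.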


2 BlockLocation

2.1 Package
org.apache.hadoop.fs.BlockLocation

2.2 Call site
JobClient's writeNewSplits method calls List<InputSplit> splits = input.getSplits(job); inside getSplits, the block locations are fetched just as the getFileLocation() example below does.

2.3 Format

public static void getFileLocation() throws Exception {
    Configuration conf = new Configuration();
    Path fpath = new Path(AConstants.hdfsPath + "hadoop-2.7.1.tar.gz");
    FileSystem hdfs = fpath.getFileSystem(conf);
    FileStatus filestatus = hdfs.getFileStatus(fpath);
    // fetch the block locations covering the whole file range [0, length)
    BlockLocation[] blkLocations = hdfs.getFileBlockLocations(filestatus, 0, filestatus.getLen());
    System.out.println("total block num:" + blkLocations.length);
    for (int i = 0; i < blkLocations.length; i++) {
        System.out.println(blkLocations[i].toString());
        System.out.println("offset in file: " + blkLocations[i].getOffset()
                + ", length: " + blkLocations[i].getLength());
    }
}
total block num:2
0,134217728,192.X.X.X
offset in file: 0, length: 134217728
134217728,76389079,192.X.X.X
offset in file: 134217728, length: 76389079
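The two blocks printed above tile the file exactly: each block starts where the previous one ends, and the lengths sum to the FileStatus length. A quick check in plain Java (class name is mine; the values are copied from the output above):

```java
public class BlockLayout {
    public static void main(String[] args) {
        long[] offsets = {0L, 134217728L};
        long[] lengths = {134217728L, 76389079L};
        long fileLen = 210606807L; // length from the FileStatus
        long total = 0;
        for (int i = 0; i < offsets.length; i++) {
            // each block begins exactly at the running total of previous lengths
            if (offsets[i] != total) throw new AssertionError("gap at block " + i);
            total += lengths[i];
        }
        System.out.println(total == fileLen); // true
    }
}
```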


Contents of the splits array

int blkIndex = getBlockIndex(blkLocations, length - bytesRemaining);
splits.add(makeSplit(path, length - bytesRemaining, splitSize,
        blkLocations[blkIndex].getHosts(), blkLocations[blkIndex].getCachedHosts()));
[hdfs://192.x.x.x:9000/hadoop-2.7.1.tar.gz:0+134217728,
 hdfs://192.x.x.x:9000/hadoop-2.7.1.tar.gz:134217728+76389079]
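getBlockIndex scans the BlockLocation array for the block whose byte range covers a split's starting offset, so each split can be placed on the hosts that hold its first block. A simplified re-implementation of that lookup in plain Java (no Hadoop dependency; the parallel-array representation is my own simplification of BlockLocation):

```java
public class BlockIndex {
    // Returns the index of the block whose [offset, offset+length) range
    // contains the given file position; -1 if none does.
    static int getBlockIndex(long[] offsets, long[] lengths, long pos) {
        for (int i = 0; i < offsets.length; i++) {
            if (pos >= offsets[i] && pos < offsets[i] + lengths[i]) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        long[] offsets = {0L, 134217728L};
        long[] lengths = {134217728L, 76389079L};
        // the second split starts at offset 134217728, which falls in block 1
        System.out.println(getBlockIndex(offsets, lengths, 134217728L)); // 1
    }
}
```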


3 LocatedBlocks

3.1 Package
org.apache.hadoop.hdfs.protocol.LocatedBlocks

3.2 Call site
When HDFS opens a file for reading, openInfo() is called, which ultimately invokes DFSInputStream's fetchLocatedBlocksAndGetLastBlockLength method to obtain the block information as a LocatedBlocks object. The block information is quite detailed: block name, size, starting offset, datanode IP addresses, and so on.

Writing a file in Hadoop actually means writing its blocks to datanodes; the namenode only learns which blocks make up the file through the datanodes' periodic reports. So when a file is read, some blocks may not yet have been reported to the namenode, and the reader can only see up to the last reported block. isLastBlockComplete indicates whether the last block is complete. If it is not, the client follows the block's pipeline from the metadata to a datanode, asks how many bytes of that block have been written so far, and stores the result in lastBlockBeingWrittenLength.
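In other words, the length visible to a reader of an under-construction file is the sum of the completed (reported) block sizes plus lastBlockBeingWrittenLength. A hedged arithmetic sketch in plain Java (class and method names are mine; the numbers are hypothetical, not from the file above):

```java
public class ReadableLength {
    // Visible length of an under-construction file: all completed blocks
    // plus whatever the last block's pipeline reports as written so far.
    static long readableLength(long[] completedBlockSizes, long lastBlockBeingWrittenLength) {
        long len = 0;
        for (long b : completedBlockSizes) {
            len += b; // completed blocks are fully readable
        }
        return len + lastBlockBeingWrittenLength;
    }

    public static void main(String[] args) {
        // hypothetical: one full 128 MB block reported, 1 MB written into the next
        System.out.println(readableLength(new long[]{134217728L}, 1048576L)); // 135266304
    }
}
```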

3.3 Format

LocatedBlocks{
  fileLength=210606807
  underConstruction=false
  blocks=[LocatedBlock{BP-1853423215-192.X.X.X-1474747765776:blk_1073741828_1004;
      getBlockSize()=134217728; corrupt=false; offset=0;
      locs=[192.X.X.X:50010]},
    LocatedBlock{BP-1853423215-192.X.X.X-1474747765776:blk_1073741829_1005;
      getBlockSize()=76389079; corrupt=false; offset=134217728;
      locs=[192.X.X.X:50010]}]
  lastLocatedBlock=LocatedBlock{BP-1853423215-192.X.X.X-1474747765776:blk_1073741829_1005;
      getBlockSize()=76389079; corrupt=false; offset=134217728;
      locs=[192.X.X.X:50010]}
  isLastBlockComplete=true
}
