
Part 1: HDFS User Guide

1. Key features of HDFS

  • Hadoop, including HDFS, is well suited for distributed storage and distributed processing using commodity hardware. It is fault tolerant, scalable, and extremely simple to expand. MapReduce, well known for its simplicity and applicability for large sets of distributed applications, is an integral part of Hadoop.
  • HDFS is highly configurable, with a default configuration well suited for many installations. Most of the time, configuration needs to be tuned only for very large clusters.
  • Hadoop is written in Java and is supported on all major platforms.
  • Hadoop supports shell-like commands to interact with HDFS directly.
  • The NameNode and DataNodes have built-in web servers that make it easy to check the current status of the cluster.
  • New features and improvements are regularly implemented in HDFS. The following is a subset of useful features in HDFS:
    • File permissions and authentication.
    • Rack awareness: to take a node's physical location into account while scheduling tasks and allocating storage.
    • Safemode: an administrative mode for maintenance.
    • fsck: a utility to diagnose the health of the file system and to find missing files or blocks.
    • fetchdt: a utility to fetch DelegationToken and store it in a file on the local system.
    • Balancer: tool to balance the cluster when the data is unevenly distributed among DataNodes.
    • Upgrade and rollback: after a software upgrade, it is possible to rollback to HDFS' state before the upgrade in case of unexpected problems.
    • Secondary NameNode: performs periodic checkpoints of the namespace and helps keep the size of file containing log of HDFS modifications within certain limits at the NameNode.
    • Checkpoint node: performs periodic checkpoints of the namespace and helps minimize the size of the log stored at the NameNode containing changes to the HDFS. Replaces the role previously filled by the Secondary NameNode, though is not yet battle hardened. The NameNode allows multiple Checkpoint nodes simultaneously, as long as there are no Backup nodes registered with the system.
    • Backup node: An extension to the Checkpoint node. In addition to checkpointing it also receives a stream of edits from the NameNode and maintains its own in-memory copy of the namespace, which is always in sync with the active NameNode namespace state. Only one Backup node may be registered with the NameNode at once.

      Source: http://hadoop.apache.org/docs/r2.6.4/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html
2. Web UI
The NameNode's built-in web UI listens on port 50070 by default.
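The status pages can also be checked from the command line; a sketch, assuming a NameNode reachable at namenode.example.com (a placeholder host name):
  # The cluster overview page lives at http://namenode.example.com:50070/
  # The same embedded web server exposes metrics as JSON through its JMX servlet:
  curl 'http://namenode.example.com:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo'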
3. Basic HDFS administration commands
bin/hdfs dfsadmin -<command>
  • -report: reports basic statistics of HDFS. Some of this information is also available on the NameNode front page.
  • -safemode: though usually not required, an administrator can manually enter or leave Safemode.
  • -finalizeUpgrade: removes the backup of the cluster made during the last upgrade.
  • -refreshNodes: updates the NameNode with the set of DataNodes allowed to connect to it. The NameNode re-reads the DataNode hostnames in the files defined by dfs.hosts and dfs.hosts.exclude. Hosts defined in dfs.hosts are the DataNodes that are part of the cluster; if there are entries in dfs.hosts, only those hosts are allowed to register with the NameNode. Entries in dfs.hosts.exclude are DataNodes that need to be decommissioned. DataNodes complete decommissioning when all of their replicas have been replicated to other DataNodes. Decommissioned nodes are not automatically shut down and are no longer chosen as targets for new replicas.
  • -printTopology: prints the topology of the cluster, i.e. a tree of racks and the DataNodes attached to them, as viewed by the NameNode.
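For reference, a few typical invocations of these dfsadmin commands, as a sketch (all of them assume you run as the HDFS superuser):
  bin/hdfs dfsadmin -report            # capacity, remaining space, and per-DataNode status
  bin/hdfs dfsadmin -safemode get      # query Safemode; 'enter' and 'leave' switch it manually
  bin/hdfs dfsadmin -printTopology     # rack / DataNode tree as seen by the NameNode
  # After editing the files referenced by dfs.hosts / dfs.hosts.exclude,
  # tell the NameNode to re-read them (used for decommissioning):
  bin/hdfs dfsadmin -refreshNodes
  # Once an upgrade has been verified, drop the pre-upgrade backup:
  bin/hdfs dfsadmin -finalizeUpgrade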
4. Secondary NameNode
The NameNode records modifications to the file system by appending them to an edits log in its local file system. When the NameNode starts, it first reads the HDFS state from an image file (fsimage), applies the edits from the log to that image, writes the merged state out as a new fsimage, and then starts a fresh, empty edits file for new modifications. Because the NameNode merges fsimage and edits only at startup, the edits file can grow very large, and the next restart can take a long time because there is so much to merge.
The Secondary NameNode periodically merges the edits log with the fsimage, keeping the size of the edits log within a limit. It usually runs on a different machine than the primary NameNode, but that machine should have a similar hardware profile (in particular, memory) to the NameNode's.
The start of the checkpoint process on the Secondary NameNode is controlled by two configuration parameters:
  • dfs.namenode.checkpoint.period, set to 1 hour by default, specifies the maximum delay between two consecutive checkpoints, and
  • dfs.namenode.checkpoint.txns, set to 1 million by default, defines the number of uncheckpointed transactions on the NameNode which will force an urgent checkpoint, even if the checkpoint period has not been reached.
In other words, dfs.namenode.checkpoint.period caps the time between two consecutive checkpoints, while dfs.namenode.checkpoint.txns caps the number of uncheckpointed transactions; whichever limit is reached first triggers the checkpoint. For example, if 1 million edits accumulate within 10 minutes, a checkpoint runs after those 10 minutes instead of waiting for the full hour.
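To check which values are actually in effect, or to force a checkpoint by hand, something like the following can be used, as a sketch (the values in the comments are just the documented defaults):
  bin/hdfs getconf -confKey dfs.namenode.checkpoint.period   # 3600 (seconds)
  bin/hdfs getconf -confKey dfs.namenode.checkpoint.txns     # 1000000
  # On the Secondary NameNode host:
  bin/hdfs secondarynamenode -geteditsize         # number of uncheckpointed transactions on the NameNode
  bin/hdfs secondarynamenode -checkpoint force    # trigger a checkpoint immediately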
5. Checkpoint node
Very similar to the Secondary NameNode. The difference is that the Checkpoint node downloads the fsimage and edits files from the NameNode, merges them locally, and then uploads the new image back to the active NameNode.
dfs.namenode.backup.address       the Checkpoint node's address
dfs.namenode.backup.http-address  the address (host:port) of its web interface
dfs.namenode.checkpoint.period and dfs.namenode.checkpoint.txns control its checkpoints in the same way as for the Secondary NameNode.
In practice the Checkpoint node and the Secondary NameNode fill the same role; they are essentially the same mechanism under different names.
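A minimal sketch of starting a Checkpoint node, assuming the two dfs.namenode.backup.* keys above already point at the host it should run on:
  # Run on the host named by dfs.namenode.backup.address;
  # the node registers with the active NameNode and starts checkpointing:
  bin/hdfs namenode -checkpoint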
6.backup node
backup node的功能和checkpoint node一样,但是backup node能实时的从namenode读取namespace变化数据并合并到本地(注意:namenode是不合并,只有重启后才合并),所以backup node是namenode的完全实时备份。
目前一个集群只能有一个backup node,未来可以支持多个。一旦有个backup node,checkpoint node就无法再注册进集群。backup node的配置文件和checkpoint一致(dfs.namenode.backup.address  dfs.namenode.backup.http-address),以bin/hdfs namenode -backup启动
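A sketch of bringing up a Backup node with those configuration keys (the host names come from your own hdfs-site.xml):
  # Confirm where the Backup node and its web UI are expected to run:
  bin/hdfs getconf -confKey dfs.namenode.backup.address
  bin/hdfs getconf -confKey dfs.namenode.backup.http-address
  # Then, on that host:
  bin/hdfs namenode -backup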
7. Import checkpoint
If the fsimage and edits files are lost, the latest checkpoint can be imported from a Checkpoint node. This requires:
dfs.namenode.name.dir       the NameNode's metadata directory; it must not already contain a valid image, otherwise the import fails
dfs.namenode.checkpoint.dir the directory holding the checkpoint image uploaded by the Checkpoint node
Then start the NameNode with the -importCheckpoint option.
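A sketch of the import itself, assuming dfs.namenode.name.dir has been emptied and dfs.namenode.checkpoint.dir contains the uploaded image:
  # Load the latest checkpoint into dfs.namenode.name.dir and bring the NameNode up:
  bin/hdfs namenode -importCheckpoint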
8. Balancer
Data in HDFS is not always placed uniformly across the DataNodes of a cluster. When placing new blocks, the NameNode takes the following considerations into account:
  • Policy to keep one of the replicas of a block on the same node as the node that is writing the block.
  • Need to spread different replicas of a block across the racks so that the cluster can survive the loss of a whole rack.
  • One of the replicas is usually placed on the same rack as the node writing to the file so that cross-rack network I/O is reduced. 
  • Spread HDFS data uniformly across the DataNodes in the cluster.

    Source: http://hadoop.apache.org/docs/r2.6.4/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html#Balancer
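Because of these competing placement policies (and because DataNodes are often added to an existing cluster), data can end up unevenly distributed; the balancer redistributes blocks on demand. A sketch, where the threshold value is only an example:
  # Move blocks until each DataNode's utilization is within 5% of the cluster average:
  bin/hdfs balancer -threshold 5
  # The balancer can be interrupted at any time (Ctrl-C or sbin/stop-balancer.sh)
  # without leaving the cluster in an inconsistent state.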
9. Rack awareness (omitted)
10. Safemode
When the cluster starts up, the NameNode loads the fsimage and edits log and then waits for the DataNodes to report their blocks, so it does not open the cluster for writing right away. During this time the NameNode is in Safemode and the cluster is effectively read-only. Once the DataNodes have reported their blocks, the NameNode leaves Safemode automatically. Safemode can also be entered and left manually.
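The manual controls mentioned above, as a sketch:
  bin/hdfs dfsadmin -safemode get     # report whether the NameNode is in Safemode
  bin/hdfs dfsadmin -safemode enter   # force read-only mode, e.g. before maintenance
  bin/hdfs dfsadmin -safemode leave   # return to normal operation
  bin/hdfs dfsadmin -safemode wait    # block until the NameNode leaves Safemode on its own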
11. fsck
The fsck command checks files and their blocks for inconsistencies. Unlike the traditional fsck for native file systems, it does not repair the problems it finds, and by default it skips files that are currently open for writing. fsck is not a Hadoop shell command; it is run as bin/hdfs fsck.
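Typical invocations, as a sketch (the paths are placeholders):
  bin/hdfs fsck /                                        # health summary for the whole namespace
  bin/hdfs fsck /user/hive -files -blocks -locations    # per-file block and replica detail
  bin/hdfs fsck / -openforwrite                         # include files currently open for write
  # -move places corrupt files in /lost+found, -delete removes them; neither runs by default.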
12. fetchdt
HDFS provides the fetchdt command to fetch a delegation token and store it in a file on the local file system. The token can later be used by a client on a non-secure host to access a secure server (the NameNode, for example). Details omitted.
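A sketch, assuming the NameNode HTTP address and the local token path shown here (both placeholders):
  # Fetch a delegation token over the NameNode's web interface and save it locally:
  bin/hdfs fetchdt --webservice http://namenode.example.com:50070 /tmp/my.delegation.token
  # Later HDFS commands can pick the token up from this file instead of Kerberos credentials:
  export HADOOP_TOKEN_FILE_LOCATION=/tmp/my.delegation.token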
13. Recovery mode
If the only copy of the NameNode metadata is lost or corrupted, Recovery mode can salvage part of the data. Start the NameNode with namenode -recover; it then prompts interactively about how to handle each problem it finds in the edit log. With the -force option it skips the prompts and always picks the first (default) choice.
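A sketch of a recovery run; since recovery may discard edits, backing up the metadata directory first is prudent (the path below is a placeholder):
  # Back up the current (possibly damaged) metadata before attempting recovery:
  cp -r /data/dfs/name /data/dfs/name.bak
  # Interactive recovery: the NameNode prompts for each problem it finds in the edit log.
  bin/hdfs namenode -recover
  # Non-interactive: always take the first (default) choice at every prompt.
  bin/hdfs namenode -recover -force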
14. Upgrade and rollback
Upgrade and rollback. Omitted.
15. File permissions and security
File permissions in HDFS are similar to those on Linux. The user that starts the NameNode is treated as the superuser of HDFS.
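Permissions are managed through the hdfs dfs shell with the familiar chmod/chown verbs; a sketch, where the paths, user, and group are placeholders:
  bin/hdfs dfs -ls /user/alice                     # shows owner, group and rwx bits per entry
  bin/hdfs dfs -chmod 750 /user/alice/private      # rwx for owner, r-x for group, none for others
  bin/hdfs dfs -chown alice:analysts /user/alice   # only the superuser may change the owner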
16. Scalability
HDFS supports clusters of several thousand nodes. Each cluster has a single NameNode, so the NameNode's memory, which holds the entire namespace, is the limiting factor for cluster size.