Implementation of basic Hadoop commands
Jan 07, 2020
In this article, we'll walk through the implementation of basic Hadoop commands.
start-all.sh: Used to start all the Hadoop daemons at once. Issuing it on the master machine will start the daemons on all the nodes of the cluster. (Note that in Hadoop 2.x and later this script is deprecated in favour of start-dfs.sh and start-yarn.sh.)
Syntax: start-all.sh
![](../../Article_img/Blog_image/d9babb3f-eaee-42f4-8a87-8882e884c88f.jpg)
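As a sketch, a typical session on the master node might look like the following (the exact set of daemons started depends on your installation):

```shell
# Start every Hadoop daemon at once (deprecated in newer releases)
start-all.sh

# Preferred on Hadoop 2.x and later: start HDFS and YARN separately
start-dfs.sh    # NameNode, DataNode, SecondaryNameNode
start-yarn.sh   # ResourceManager, NodeManager
```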
jps (Java Virtual Machine Process Status Tool): This command is used to check all the Hadoop daemons running on the machine, such as NameNode, DataNode, ResourceManager, and NodeManager.
Syntax: jps
![](../../Article_img/Blog_image/cc569a6b-0607-4b74-b1cb-a0af88028bfe.jpg)
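For illustration, on a single-node setup the output might look roughly like this (the process IDs shown are hypothetical):

```shell
jps
# 2401 NameNode
# 2563 DataNode
# 2771 SecondaryNameNode
# 2934 ResourceManager
# 3102 NodeManager
# 3350 Jps
```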
hadoop version: This command is used to check the version of Hadoop you are currently working with.
Syntax: hadoop version
![](../../Article_img/Blog_image/7c632417-2ac4-4438-a1ba-0c148d235bab.jpg)
java -version: This command is used to check the Java version you are currently working with.
Syntax: java -version
![](../../Article_img/Blog_image/14b18a5a-f0c2-4593-afa9-1b2d17ffbb14.jpg)
mkdir: This command is used to create directories in HDFS.
Syntax: hdfs dfs -mkdir [-p] <paths>
![](../../Article_img/Blog_image/f6364963-b7ae-4aac-b9ac-8a9553c3dbc9.jpg)
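A minimal sketch, assuming we want a hypothetical directory /user/demo/input (the -p flag creates any missing parent directories, like the Unix mkdir -p):

```shell
# Create a single directory (fails if /user/demo does not already exist)
hdfs dfs -mkdir /user/demo/input

# Create the directory and any missing parents in one step
hdfs dfs -mkdir -p /user/demo/input
```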
Now, to check whether your directory was created at the remote location, open a browser and go to:
localhost:9870/explorer.html#/
![](../../Article_img/Blog_image/19dd2d03-2ddd-4002-be89-6f0ab997bda3.jpg)
gedit: It is the default text editor of the GNOME desktop environment. One of its neatest features is tab support, so you can edit multiple files at once.
Syntax:
gedit <file_name>
![](../../Article_img/Blog_image/7062971b-0bd7-4b5f-8783-e01dbffbd53a.jpg)
![](../../Article_img/Blog_image/87b5c406-e301-4d02-be37-37acd7d7449f.jpg)
put: This command is used to copy files from the local file system to HDFS; that is, it uploads a file into a directory created earlier on the remote file system.
Syntax:
hdfs dfs -put <localsrc> <destination>
After that, check that your file appears at the remote location, in the directory you created earlier.
![](../../Article_img/Blog_image/61a289a3-5de2-4cd9-8b12-e36b1d181d9e.jpg)
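As a sketch, assuming a hypothetical local file sample.txt and a hypothetical HDFS directory /user/demo/input:

```shell
# Copy a local file into HDFS
hdfs dfs -put sample.txt /user/demo/input

# Verify the upload by listing the target directory
hdfs dfs -ls /user/demo/input
```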
cat: HDFS Command that reads a file on HDFS and prints the content of that file to the standard output.
Syntax:
hdfs dfs -cat URI [URI ...]
![](../../Article_img/Blog_image/f5d9b7be-5591-4dd8-ab85-d2ee69819643.jpg)
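For example, assuming hypothetical files under /user/demo:

```shell
# Print the contents of an HDFS file to standard output
hdfs dfs -cat /user/demo/input/sample.txt

# Several URIs can be given; their contents are printed one after another
hdfs dfs -cat /user/demo/a.txt /user/demo/b.txt
```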
expunge: HDFS Command that makes the trash empty.
Syntax:
hdfs dfs -expunge
![](../../Article_img/Blog_image/74579b06-00cb-4ec0-8fad-fa06cf93b981.jpg)
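A short sketch with a hypothetical path: when trash is enabled, deleted files land in the user's trash directory first, and -expunge clears out checkpoints older than the configured retention period:

```shell
# Delete a file; with trash enabled it is moved to .Trash rather than removed
hdfs dfs -rm /user/demo/input/old.txt

# Permanently remove trashed files whose retention period has expired
hdfs dfs -expunge
```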
rm: This command is used to remove a file (or, with -r, a directory) from HDFS.
Syntax:
hdfs dfs -rm /[dirname]/[filename]
![](../../Article_img/Blog_image/4d7451ee-80ad-4b30-91f6-efaa1f8abc41.jpg)
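As a sketch, with hypothetical paths; -r is needed for directories, and -skipTrash bypasses the trash:

```shell
# Remove a single file
hdfs dfs -rm /user/demo/input/sample.txt

# Remove a directory and its contents recursively
hdfs dfs -rm -r /user/demo/input

# Delete immediately, bypassing the trash
hdfs dfs -rm -skipTrash /user/demo/tmp.txt
```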
cp: This command is used to copy data from one HDFS directory to another.
Syntax:
hdfs dfs -cp [source] [destination]
![](../../Article_img/Blog_image/08625efc-e848-46de-8a32-58df17493d8a.jpg)
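For illustration, with hypothetical source and destination paths:

```shell
# Copy a file from one HDFS directory to another
hdfs dfs -cp /user/demo/input/sample.txt /user/demo/backup/

# Multiple sources are allowed when the destination is a directory
hdfs dfs -cp /user/demo/a.txt /user/demo/b.txt /user/demo/backup/
```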
tail: This command is used to show the last 1KB of a file.
Syntax:
hdfs dfs -tail /[dirname]/[filename]
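As a sketch, with a hypothetical log file; the -f option follows the file as it grows, like the Unix tail -f:

```shell
# Show the last kilobyte of an HDFS file
hdfs dfs -tail /user/demo/logs/app.log

# Follow the file, printing new data as it is appended
hdfs dfs -tail -f /user/demo/logs/app.log
```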
mv: This command is similar to the UNIX mv command, and it is used for moving a file from one directory to another directory within the HDFS file system.
Syntax:
hdfs dfs -mv /[source_dir_name]/[file_name] /[destination_dir_name]
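For example, with hypothetical paths; since mv works within HDFS, it is also the usual way to rename a file:

```shell
# Move a file between HDFS directories
hdfs dfs -mv /user/demo/input/sample.txt /user/demo/archive/

# Rename a file in place
hdfs dfs -mv /user/demo/old_name.txt /user/demo/new_name.txt
```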
stop-all.sh: Used to stop all the Hadoop daemons at once. Issuing it on the master machine will stop the daemons on all the nodes of the cluster.
Syntax:
stop-all.sh
![](../../Article_img/Blog_image/c649fba4-ba34-4c56-bf48-63f6b7464ed9.jpg)