1. Deleting Nested Folders
With the rm command you can remove (unlink) files and folders from your hard drive. But what about a whole tree of nested folders, each with its own files and subdirectories inside? The -r option recursively works through every subfolder, removing both the files and the directories themselves.
If you add the -f option the removal is forced: you aren't prompted to confirm anything, there is no output on success, and nonexistent files are silently skipped. The whole command in action may look like this:
rm -r -f /home/you/documents/mydir1/2016
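If the directory you want to drop is already empty, the plain rmdir command is enough; unlike rm -r -f it will refuse to touch anything that still has contents. A small sketch, reusing the path above with a hypothetical empty subfolder:
rmdir /home/you/documents/mydir1/empty_dir   # empty_dir is a placeholder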
2. Connecting to a Database
When you are accessing a website backend frequently you'll want to ensure a safe connection is used. This goes double for database connections, where website and user information is stored. But if you're working with a local database install you can probably get away with far fewer security requirements.
Depending on the system you are using, the syntax will differ slightly, but the basic call to connect to a database is generally the same. You will need the name of the database you're accessing, your username, your password, and possibly the database hostname (usually localhost). I've added two shell commands for connecting, one for MySQL and the other for Sybase.
mysql -u myusername -h localhost -p
Here you simply hit Enter without supplying a password on the command line. Because of the -p flag, the client will then prompt for your password on a new line. Type it in and hit Enter again; MySQL will welcome you upon success.
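If you prefer to land directly in a particular database, mysql also accepts the database name as a final argument; mydatabase below is just a placeholder:
mysql -u myusername -h localhost -p mydatabase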
isql -U myusername -P mypassword <<EOF
use gdb_1
go
EOF
Sybase is another great example of database software. You can access these types of databases with the isql command, similar to the mysql command above. Here you provide a username and password, then call the use command to pick your database; the go keyword tells isql to execute the buffered command.
3. Restore a Database
Now we come to restoring the backup of a database. This isn't as complicated as you might think, although from the looks of the previous code I can understand why it might seem that way. Consider that it's a lot easier to load a previous dump back in than it is to connect to a remote server and pull the data down by hand.
In Sybase you’ll be doing a lot more work in shell. But the basic command is load database dbname. You can follow this up with further options, and of course you’ll need to be connected into the database before this will work. If you’re stuck try using the Sybase documentation file as a reference point.
With MySQL you only need a single command if you're already logged in, and even if you aren't, you can connect and run the restore in one go. This is because the backup of any MySQL database is basically SQL code which can reconstruct the database from scratch. It's also why some backups are enormously large and oftentimes too big to upload via a web interface like phpMyAdmin.
You can call the mysql command with a single line. As before you enter -u and -p but only fill in your username since your password is prompted afterwards. The code below should work perfectly:
mysql -u username -p database < /path/to/dump_file.sql
The only placeholders you'll want to replace are username, database, and the path to the backup. The username and database are the same as before when you connected, so you only need to find where your database backup is stored and point the command at it.
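For reference, the dump file itself is usually created with the companion mysqldump tool, using the same placeholders as above:
mysqldump -u username -p database > /path/to/dump_file.sql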
4. Direct Shell Downloads
The wget command is very interesting and offers a lot of options. GNU wget is a non-interactive utility for downloading files from the Internet, and it supports the standard HTTP, HTTPS, and FTP protocols.
To download a basic file you would type wget url, where url is the full address of your file. This could be anything online, such as http://media02.hongkiat.com/v4s/n_logo.gif for the Hongkiat .gif logo file. If you put a list of URLs into a plain text file, you can hand it to wget and download large batches of videos, images, music, or other content in the background while you work. And keep in mind that wildcards such as * and ? are only supported for FTP URLs, not for plain HTTP downloads.
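Here are both cases as a quick sketch, reusing the logo URL above and a hypothetical urls.txt file with one address per line:
wget http://media02.hongkiat.com/v4s/n_logo.gif
wget -i urls.txt   # urls.txt is a placeholder list file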
Now you may also wish to download content via FTP. However, much of the time you won't be working with public FTP servers and will need a username/password. The login syntax is a bit confusing, but I've added a small example below.
wget ftp://username:password@ftp.mywebsite.com/files/folder/*.jpg
5. Compress Folders
We touched on compression a bit earlier, but only in passing. There are some very simple file compression tools you can call from the command line anywhere. I recommend the zip command if you are new to the shell, only because the alternatives on Linux can get confusing. However, if you'd like to use gzip or another alternative, feel free.
Whenever you run a zip command you'll want to include all the files for your new archive. The first parameter is the archive name, and what follows is the folder you'd like to compress, or alternatively a short list of files to zip. Adding the -r option recursively traverses your directory structure to include every file. Below is an example of a small folder compression.
zip -r newfile_name.zip /path/to/content/folder
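If you later need to unpack that archive, the matching unzip command works much the same way; the extraction path below is just a placeholder:
unzip newfile_name.zip -d /path/to/extract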
6. Mass Find and Replace
Whenever you have a large collection of files you’ll often have them labeled or numbered in a similar pattern. For example, with a large collection of website banners they may all include the ‘banner’ prefix or suffix. This could be mass replaced in all files with the shell sed command.
sed is a stream editor used to perform basic text transformations and edits on files. It is also very fast, sweeping through all the matching files in a directory almost instantaneously. Below is some example code using the command.
sed -i 's/abc/xyz/g' *.jpg
The abc/xyz pattern above is only a placeholder, but the idea is this: sed opens every .jpg file in the current directory and substitutes xyz for each occurrence of abc. With the -i option the files are edited in place, with no backup copies required. Have a quick peek at the sed documentation for more info.
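Here is the same idea in a more typical text scenario, assuming a hypothetical set of HTML pages that still reference an old banner name:
sed -i 's/banner_old/banner_new/g' *.html   # the banner names are placeholders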
7. Create New Files
It can be pesky to create a whole heap of files in one sitting. If you would like to create a large set of documents or text files without opening extra software, the command line is a great tool. Consider some of the editors at your disposal directly from the shell.
vi/vim is possibly the best and most useful editor for the Linux CLI. There are others, such as the JOE text editor. You can also create a file with the cat command by redirecting its output into a new file name, although on its own cat only displays file contents and doesn't let you edit them.
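If all you need is a batch of empty files to fill in later, touch together with bash brace expansion is a quick sketch (the names are placeholders):
touch report_{01..10}.txt   # creates report_01.txt through report_10.txt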
With vi you’ll only need to call a single line of code. I’ve added the code below which is simply vi command followed by your new filename. Once you are in vi editor type ‘i’ to edit and insert new text. To save and exit a file press the esc key followed by colon+x (:+x) and hit enter. It’s a strange combination, but it’s awfully secure and once you get the hang of things you never want to go back!
vi /home/you/myfile.doc
8. Package Management
When installing software from the shell you'll mostly be working with two different packaging systems. RPM Package Manager (RPM) and the Debian package format (DEB) are the most widely known. Both are kept up to date with the latest packages, which you can download from the closest mirror site.
The install commands are very similar on either system. yum and rpm are the two commands used on RPM-based distributions. The syntax follows yum command package-name. So for example:
yum install package-name
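If you already have a downloaded .rpm file on disk, the lower-level rpm command can install it directly; the file name below is a placeholder:
rpm -ivh package-file.rpm   # -i install, -v verbose, -h print progress hash marks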
For Debian/Ubuntu users you'll be using the Debian package manager via apt-get. Again the syntax follows a similar format: you call the package manager, the command, and follow it all up with a package name. The two examples below are formatted for an install and an upgrade of a single package, respectively.
apt-get install package-name
apt-get install --only-upgrade mypackage1
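Before either of those, it's common practice to refresh the local package lists first so the newest versions are pulled from the mirrors:
apt-get update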
9. Generate List of Largest Files
Organization is what keeps you running at all hours of your work sessions. When you start to lose track of files and notice your directories getting too large it’s time for some Spring cleaning. The ls command is very useful in shell as it gives you a greater perspective into some of your directories.
This includes sorting specific types of files and file formats. If you’d like to find the biggest files in any directory on your HDD simply apply the command below.
ls -lSrh
There are 4 separate options attached to this command. -l is used to list full output data. -S sorts the entire list by file size, initially from largest to smallest. Applying -r then reverses the sort order so the largest files end up at the bottom of the output. This is handy since the shell window leaves you at the very bottom of the output anyway, so the biggest files are right in front of you. -h stands for human-readable output, so you'll see file sizes in kilobytes, megabytes, or gigabytes instead of raw bytes.
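The same command also accepts a directory argument, so you can size up any folder without changing into it first; the path below is just a placeholder:
ls -lSrh /home/you/downloads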
10. Create an E-mail On-The-Fly
If you use desktop software for your e-mail accounts, this trick can save you loads of time. Often you know the address you want to write to but don't want to spend time clicking through your mail client. A mailto: link works exactly the same from the command line as it does from any browser or website; on Linux you can hand it to xdg-open (or open on macOS) and your default mail program will pick it up.
Even if you don't know the address you want to send to, just add in anything; noreply@nothing.com works great, or be creative with your own filler content. Either way, once you hit Enter a brand new e-mail message window pops open with that recipient address filled in. You can adjust the subject, body, and CCs to your own needs in an instant.
xdg-open mailto:noreply@cyberphoton.com
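You can also prefill the subject line through standard mailto query parameters; the subject text below is just an example:
xdg-open "mailto:noreply@cyberphoton.com?subject=Quick%20question"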