Archive for the ‘BSD/Linux/Unix’ Category

Running the script you are currently editing in vi

Tuesday, April 24th, 2012

Instead of doing all the typing involved with leaving your vi session to run the script you just edited, you could simply type

:!%

“%” is a shortcut for the current filename, and it expands to the name exactly as you opened the file. For the bare % to work, the shell has to be able to find the script, which in practice means the file should be in your $PATH. If that’s inconvenient or unwise, you can always substitute % with the file’s absolute path. The file also needs to be executable, which, if it isn’t, you can fix by typing

:!chmod +x %
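
If the script isn’t somewhere in your $PATH and you’d rather not put it there, a couple of alternatives (just sketches, assuming you opened the file by a relative name from its own directory, and substituting whatever interpreter your script actually needs for sh):

:!sh %
:!./%

The first runs the file through the interpreter explicitly (so the executable bit doesn’t even matter); the second expands to ./yourfilename, which the shell runs without consulting $PATH.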

Of course, if you have just created the file (or have unsaved changes), you’ll need to do a :w to actually commit it to the filesystem before you can make any calls to it.

Remember to use

:!!

to repeat the last shell command you ran from within vi and save yourself even more typing.

VI Cheat Sheet

Monday, May 17th, 2010

Replace commas (or something) with line breaks (or something)

:%s/,/^M/g
where ^M is entered by typing ctrl+V followed by ctrl+M
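
If it’s vim rather than old-school vi you’re running, \r in the replacement does the same job without the ctrl+V dance:

:%s/,/\r/g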

Use vi to comment out (or do something else to) several consecutive lines

Example: If you wanted to comment out lines 20 to 40 of a file, use the following:

:20,40s/^/# /
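
And to take the comments back out later, a similar substitution strips the prefix off again (assuming the lines were commented with that exact "# " string):

:20,40s/^# //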

Use

:set number

to enable visible line numbers.
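
And

:set nonumber

turns them back off.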

Has the “terminal too wide” problem been plaguing you on those farty old Solaris boxes for the last 6.5 years, while you were usually too busy doing something else to stop it from ever happening again?

Tuesday, May 11th, 2010

Well, you can install VIM, or if that’s not practical (or if you’re not in the mood to install VIM and its required dependencies on the hundred or so boxes you log into), then the command "stty columns 120" will do ya.
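
If you’d rather not retype that every time you log in, here’s a minimal sketch you could drop into ~/.profile (assuming a Bourne-style login shell that actually reads it, and a width that suits your terminal):

# only fiddle with terminal settings when stdin is really a terminal
if [ -t 0 ]; then
    stty columns 120
fi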

Stupid Unix Tricks: Creating Files

Thursday, October 1st, 2009

Typically I create files using the vi editor. vi, unlike just about any other editor in the *nix world, is ubiquitous, even on stripped-down systems and in single-user mode, so it’s worth knowing how to use for basic editing. The great thing about vi is that the more you work with it (and spend time learning about it), the more you discover it can do to make your life easier. I will write more about this in another post.

In addition to vi, another way I quickly create files on *nix systems is with touch.

$ touch newfile.txt

This creates an empty file that you can later append info to, edit, or what have you. touch can also change an existing file’s timestamp without altering the file’s contents; in fact, that may be touch’s raison d’être. But I almost always use it to create new, empty files when I need to (or when I need to test the umask settings of the user I’m logged in as).
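
For example (the file names and timestamp here are just placeholders):

$ touch -t 200901011200 oldfile.txt   # set an existing file's mtime to Jan 1 2009, 12:00, without touching its contents
$ umask                               # show the current umask
$ touch newfile.txt; ls -l newfile.txt   # the new, empty file's permissions reflect that umask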

Another quick and dirty file creation method is simple I/O redirection. To create a new, empty file:

$ >newfile.txt

To create that file with a blurb in it,

$ echo blurb > newfile.txt

(newfile.txt will contain the word blurb), or

$ echo "Longer blurb with more words than the original one we created" > newfile.txt

(the sentence in quotes will be in the file). Note: with I/O redirection, the > operator will overwrite/clobber the contents of the file it points at (the one on the right-hand side), so be careful when using this that “newfile.txt” or whatever you’re redirecting to doesn’t have anything important in it… or you can use >> to append to, rather than overwrite/clobber, the file.
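
For example, to tack another line onto the end of the file instead of wiping it out:

$ echo "second blurb, appended after the first" >> newfile.txt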

This is cool, and you can do a lot with the echo command and escape sequences to get some level of formatting into the new file. But if you need formatting, why not use an editor, or at least editor-like functionality? Leaving vi and other editors aside, the simple cat command (short for concatenate) with some I/O redirection can be pretty cool, and not quite the overkill that the full vi editor is (and perhaps a few fewer keystrokes from start to finish).
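
(A quick, hedged aside on the escape-sequence idea before we get to cat: printf is the more portable way to do that kind of formatting, since echo’s handling of escapes and -e varies from shell to shell. Something along the lines of

$ printf "line one\nline two\n\tindented third line\n" > newfile.txt

gets you real newlines and a tab in newfile.txt. Anyway, on to cat.)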

$ cat > newfile.txt

(creates the file, but you are still concatenating, so…)

type your message/write your script here

create a new paragraph if you like

get fruity with formatting if you need

^D (ctrl + D) to terminate with EOF (end of file)

Voila, you have a file with the contents you just typed in cat mode. It’s nothing fancy, but if you want to whip up a quick and dirty script, it’s one way to get started.
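
A close cousin of this trick (just a sketch, not part of the routine above, and the file name is made up) is the here-document, which feeds cat the same kind of multi-line text without the interactive ctrl+D step, so it also works inside other scripts:

$ cat > newscript.sh <<'EOF'
#!/bin/sh
echo "hello from a quick and dirty script"
EOF
$ chmod +x newscript.sh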

Pretty stupid stuff, huh?

Rant: Sun’s Web Site

Thursday, August 14th, 2008

I sent a message to Sun after a frustrating afternoon trying to get a patch cluster to patch my personal Sun server. The grammar and wording suck, but I was completely pissed (and it’s been building up over the last 3-4 years of trying to use their awful web site).

Subject: Your Website: Awful!

I’m trying to download the 10_Recommended patch cluster. 3-4 successive attempts to download the 600+ MB file completed after 220MB leaving me with a corrupt file. It took me 5-10 minutes to actually even find the page containing the patch cluster to begin with. Then I learned that I needed to be logged in to download the cluster. But I wasn’t given a login prompt, I was just given an error page. I had to go back and search the previous pages for a login prompt. Overall your site has, for the last 2-3 years, completely sucked. Why don’t you help the people who use your products by making things easier to find, and making them available (not by filling our hard drives with corrupt zip files)? Also, don’t make people log in to read your precious tech docs and spec sheets. That’s completely stupid too.

I used to think the people I worked for were crazy for dumping Sun for Dell/Windows, but now I see that you seem to be too inadequate to handle the business people give you. I guess you’ll be going the way of Sco Unix before long. Can’t say I’ll be that sorry to see you go when you do.

Catastrophic Personal Data Loss

Wednesday, November 21st, 2007

Do you know how much space all the files you’ve been collecting and saving for the last 10 years take up? You know what I’m talking about: all your backed up files, your contacts, old emails, old correspondence, old Word docs, your various downloads, your various license keys/serial numbers, every digital photo you’ve ever taken or had taken of you, your pr0n, your vast collection of ebooks & audio books, your wicked collection of ultra cool and ultra rare (out-of-print) music books, sheet music, play-along CDs, backing tracks, tabs, magazine scans, etc., etc.?

In my case it was 255GB (yes, gigabytes, not megabytes). The reason I know is that although I’ve always been careful about backing this huge store of amassed information up to a second hard drive across the network (thus having two live copies always available), I (along with the help of a failing <brand> hard drive) may have just wiped it all off the face of the earth in a couple of stupid, fell swoops.

I always slept well at night knowing I had done a recent rsync to sync up any changes made/files added from my master data drive to my backup data drive. Why, just this month I was in there tagging some of the (complete collection of) Aebersold CDs and renaming things to a more logical, easy-to-find system. Piles of this kind of data take hours of painstaking work to categorize and organize. It’s something I’ve never felt finished with, and I’m always in there tinkering to make the system better, finding new ways to automate the process, etc. It’s like a rather large hobby of mine. I guess I’m kind of an archivist at heart.
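
(For the record, the kind of invocation I’m talking about is roughly the following, with the paths standing in for my real mount points:

$ rsync -av /mnt/master-data/ /mnt/backup-data/

-a keeps permissions, timestamps and so on intact, and -v just shows what’s being copied.)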

Well, yesterday I realized it’s quite possible I’ve lost it all to a couple of seriously boneheaded actions on my part and a surprise disk failure on a 320GB “back up” hard drive on my LAN’s file server (running Ubuntu server + Samba). It started with my other post from this week about Windows Vista. I was in my main workstation (2 system disks that are smaller 10K RPM raptors for the OSes + 2 500GB data disks full of audio + 1 320GB external USB drive on which these data files reside). I was smart enough to unplug the 500GB drives before messing around with the OS b/c I’ve had enough experience to know that it’s easy to wipe one of these when messing around with partitioning software in any OS. At some point I saw that the light on the USB drive was on and I thought to myself I should go ahead and turn that drive off until I was done. I got distracted and forgot to do it. (Bad move #1.) Then, when I was reinstalling XP and was in the partition menu, I saw two drives with byte counts starting with “3” something. So I wiped the first one, thinking it was the 36GB raptor I use for the OS. (Bad move #2.) It turned out to be the friggin’ data drive. I realized it almost immediately. I breathed a sigh of relief, however, since I knew I’d done an rsync between that drive and the “backup” drive about a day or two before, so there probably wasn’t any data loss. But I hadn’t checked the health of my backup drive before doing anything serious like installing an OS, and I don’t have any automated log monitoring set up to send me email or SMS alerts for things like drive failures. (Bad move #3.)

So out of the 5 hard drives in my workstation, and the 4-5 hard drives in my LAN file server, guess which one turns out to be experiencing a total failure? Guess how I found out that the drive was failing? If you guessed that it was when I was desperately in need of a backup of the 255GB of data existing on that very 1 drive out of 9-10 hard drives I have spun up at any given time, then you guessed right.

BLERG!!!

I spent all day in a kind of semi-catatonic daze. I think I was partially in denial about it. I was also racking my brain for what I was going to do to recover it (and trying to get a feel for the scope of my loss, which just got worse and worse the more I realized how much I had been storing on these hard drives). I even had to go to my history class and take a test while I was waiting for fsck to stop spewing errors to my screen (running it may have made the situation worse for any hope of recovering anything from that drive). By the end of the night I couldn’t even mount the drive, as the filesystem was no longer recognized as a valid Linux filesystem, and I started getting IDE controller errors in the logs as well, which was a change from the earlier imagic & bad block errors I was getting when I first realized I’d just dicked myself out of all this data (and the man hours spent collecting and organizing it).

Today I was home from work, so I got right up, got a coffee, and pulled out my Knoppix CD to see what could be done. Much to my surprise I could mount the drive, and it appears I was able to x-fer a few of the smaller directories over to a spare 320GB USB hard drive. While I was waiting for the transfers to finish, I got to thinking that the dd utility might be better than mounting this disk up and copying/rsyncing the files over. (Actually, certain directories were unreadable and had very weird user info and perms; nothing I could do would change the perms, and rsync just wasn’t dealing with the errors, as it probably shouldn’t have.) Then I stumbled onto something even better than dd (I hope) called dd_rescue, which changes the block size of the transfer on the fly to better accommodate a drive that is throwing errors. It’s a utility specifically designed for recovering data from a failing drive. So I’m very hopeful I can get something back. I have yet another 320GB USB drive hooked up and am running it now. It seems to be taking a while (like 25GB x-fered in about 2-3 hours), but it’s also IDE to USB, and if it took 3 weeks and worked it’d be well worth it.

Man, Linux fucking rules. The native logging and the native tool chain available for troubleshooting and fixing stuff like this are unbelievable. You know what it cost for an OS that is stable and feature-laden? A single CD-R and a 699MB download. I’m sure there’s a Windows solution like Winternals, or some other 3rd party app, that would do what the free dd (or dd_rescue) does, but it’d cost at least $150 and have some license that times out in a year, or the app would be outdated in 6 months and you’d have to buy it all over again. And that’s all after paying for the OS, paying for the rsync-like utility that does backups, and all the other software I’d have to pay for to do what Knoppix does out of the box. Pfft!

Anyway, back to my recovery…. I’ll post the outcome when it’s done.