Tag Archives: scripting

Bash Script to Merge WAV Files into a Single MP3

I record audio notes to myself, and it can be difficult to go back and listen to them, especially because I have to be at my computer to do so. I needed a way to combine my smaller notes into a single file that I could put on my phone and listen to while I commute to work. After doing some research online, I wrote the following bash script to do just that:

#!/bin/bash
mkdir -p temp_mp3
# Seed an empty file so the first cat below has something to concatenate onto
> Combined-temp.mp3
find . -size -20000k -name '*.WAV' | \
while IFS= read -r f
do
  echo "Processing $f"
  lame "$f" "$f.mp3"                              # convert the WAV to MP3
  echo "Adding $f.mp3 to Combined.mp3"
  cat "$f.mp3" Combined-temp.mp3 > Combined.mp3   # join the new MP3 with everything so far
  mv Combined.mp3 Combined-temp.mp3
  mv "$f.mp3" ./temp_mp3/                         # stash the individual MP3 out of the way
done
mv Combined-temp.mp3 Combined.mp3
mp3val Combined.mp3 -f -nb                        # repair the header info of the joined file

The bash script is straightforward, but for those of you who don't read bash, here is what it does:

  1. Run this script in your desired directory
  2. A list of all of the WAV files that are less than 20 megabytes is generated (using find)
  3. Each file is converted into an MP3 (using LAME)
  4. Each new MP3 is concatenated onto a single MP3 file, Combined.mp3 (using cat; see the append-style sketch after this list)
  5. After it has been concatenated, the MP3 file is moved to a directory, ./temp_mp3
  6. Once all of the applicable WAV files have been processed, the Combined-temp.mp3 is renamed to Combined.mp3
  7. mp3val is run on the newly renamed Combined.mp3 to clean up the header information left over from the concatenation
  8. Enjoy listening to your new MP3
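
If you do not need the Combined-temp.mp3 shuffle, the same concatenation can be done by appending each converted note directly onto the combined file. The following is just a minimal sketch of that idea, assuming the same 20 MB size cutoff and .WAV naming as above, with the notes joined in sorted filename order:

#!/bin/bash
# Sketch: append each converted note straight onto the combined file
mkdir -p temp_mp3
> Combined.mp3                     # start from an empty combined file
find . -size -20000k -name '*.WAV' | sort | \
while IFS= read -r f
do
  lame "$f" "$f.mp3"               # convert the WAV to MP3
  cat "$f.mp3" >> Combined.mp3     # append instead of rebuilding via a temp file
  mv "$f.mp3" ./temp_mp3/
done
mp3val Combined.mp3 -f -nb         # repair the header info of the joined file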

Recovering Pictures from a Cell Phone

I recently had a crash on my phone and thought I had lost the majority of my pictures. Luckily, I was able to recover them using PhotoRec. The only problem was that, thanks to some glitch in my phone's file system, each time PhotoRec neared the end of the scan it would jump back several thousand sectors and start recovering the same files all over again.
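
For reference, the PhotoRec run itself is nothing fancy; an invocation along the lines of the one below (the destination directory and device path here are just placeholders, not my exact setup) drops you into PhotoRec's interactive menus and writes whatever it finds under the chosen destination:

sudo photorec /log /d recovered_pics /dev/sdb1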

The next problem was that I had several duplicate files and needed to prune the redundancies. Thankfully there is a program, FSlint, designed to do just that. Using a file size, MD5sum, and SHA1sum analysis regimen to find the duplicate files, I was able to eradicate the extra copies of each picture.
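
FSlint does this through its GUI, but the underlying idea, hashing files and treating matching checksums as duplicates, can be sketched with plain coreutils. This one-liner is only an illustration of the approach (it lists candidate duplicate groups rather than deleting anything), not what FSlint actually runs:

# Group recovered JPEGs by MD5 sum; files whose first 32 characters (the checksum) match are printed together
find . -type f -iname '*.jpg' -exec md5sum {} + | sort | uniq -w32 --all-repeated=separate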

Now my problem was that I had several recup_dir directories. I wanted to take all of the JPEGs from their respective folders and move them into a single folder. Searching around the Internet led to this post on LinuxQuestions.org, and the suggested command worked beautifully:

find (start directory) -iname "all my files type" -exec cp {} (target_dir) \;
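
Filled in, that looks something like the lines below (all_pictures is just an example target name); cp's -n (no-clobber) flag keeps any filename collisions between the different recup_dir folders from silently overwriting each other:

mkdir -p all_pictures
find recup_dir.* -iname '*.jpg' -exec cp -n {} ./all_pictures/ \;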

Now I had all of my pictures recovered, duplicates removed, and sitting in a single directory; however, their original file names and dates were wrong. Thankfully, I found this handy little script on TuxRadar that renames the files and updates their timestamps according to their embedded EXIF data:

find . -name '*.jpg' | while IFS= read -r PIC; do
  # Pull the original date from the EXIF data and strip the colons and space:
  # "2011:06:04 13:37:00" becomes "20110604133700"
  DATE=$(exiftool -p '$DateTimeOriginal' "$PIC" | sed 's/[: ]//g')
  # touch -t wants [CC]YYMMDDhhmm[.ss], so put a dot before the seconds
  touch -t $(echo "$DATE" | sed 's/\(..$\)/\.\1/') "$PIC"
  # Rename the file after its timestamp, prompting before overwriting anything
  mv -i "$PIC" "$(dirname "$PIC")/$DATE.jpg"
done

Thanks to the wonderful world of FLOSS, I was able to complete my mission: recover my once-lost pictures, rename them according to their metadata, and live a happy Saturday. I love the community.