AUTHOR: Joseph M. Dupré <dupre avab com>

DATE: 2005-08-13

LICENSE: GNU Free Documentation License

SYNOPSIS: How to build a LFS 6.1 system using the LFS live CD as your
host OS.

DESCRIPTION:
This hint should guide you through the process of building a LFS
system using the LFS live CD as the host operating system.  You use
the LFS live CD to boot the computer instead of loading a pre-
packaged linux distribution like Debian or Red Hat.  This is the
"cleanest" way to build a LFS system, since inconsistancies and
idiosyncracies of the various distributions are removed.

ATTACHMENTS:
[none]

PREREQUISITES:
The prerequisites are the same as those of LFS.  You will also need to
have the LFS live CD for the version of LFS that you intend to build.
 I also assume that you know how to create and edit a text file with
an editor, rather than using redirected cat or echo commands.  This
is not a hint on using the live CD, so any problems you encounter
using or booting your system with the live CD should be directed to
the LFS live CD mailing list.

This hint was written for LFS 6.1.  Due to the rapidly changing
nature of linux, some of the scripts described herein may break in
future versions of LFS.  If you understand what the scripts are doing
and why, it should not be very difficult to adapt the scripts to
match future versions of LFS.

HINT:
It was suggested that this hint could be reduced to "follow the
book".  Well, almost.  You could also use the ALFS profiles which are
included on the Live CD.  But that is not the point.  This hint
points out the differences in building LFS when booted from a "Live
CD", specifically the LFS Live CD, as opposed to a pre-installed
linux distribution.  This hint also provides a couple of scripts that
will make your job easier.

So yes, please follow the procedures in the LFS book.  When something
differs when using the live CD environment, it will be pointed out
below by chapter number.

Although not specific to this hint, I do want to point out the
following since I assume that most people who will be reading this
are building LFS for the first time:  When following the LFS book
note the difference between the "back-tick" [ ` ] and the single
quote [ ' ].  On a US keyboard the back-tick is on the key with the [
~ ] and the single-quote is on the key with the [ " ].  Also,
capitalization matters.  If you specify a configure or make option in
the wrong case, you may not get any warning and the build may fail or
exhibit problems later.  A wrong character in a "sed" command can
trash files.  Think about what each command does before you hit the
enter key.  If you don't understand what is happening, look it up
using the man pages.  I learned more about Linux in general by
building my first LFS system than I did in several years of
administering a popular distribution.

In the command examples shown below the leading # is a prompt.  Do
not type it.  However, in the script examples, the leading #
indicates a line that is a comment.  In this case you must type the #
within the script, or ignore the line completely.

So, begin at page one of the LFS book.  When you get to the chapters
noted below, do what it says here...

2.2
By now you should have booted your system using the LFS live CD.  All
the tools for creating a partition and creating a filesystem are
included in the live CD's operating system.
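
For example, to make a partition with cfdisk and put an ext2
filesystem on it with mke2fs (the device names below are only
examples; substitute your own disk and partition, and use whichever
partitioning tool on the CD you prefer):
# cfdisk /dev/hda
# mke2fs /dev/hda3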

3.1
All the source packages and patches should be on the live CD in
/sources.  You don't have to copy them all to your LFS partition if
you do not have the space, but it makes life easier.
# cp -R /sources $LFS
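
If you are not sure whether the partition has enough room, you can
compare the size of the sources with the free space first:
# du -sh /sources
# df -h $LFS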

4.2
Since you create a link on the root of the live CD's filesystem, it
will be gone the next time you boot.  We will add this link to a
startup script later.
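
The $LFS/tools directory itself lives on your LFS partition, so it
survives a reboot; only the symlink in / is lost.  Create the
directory now as the book describes:
# mkdir $LFS/tools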

4.3
Since the OS loaded from the CD cannot be permanently damaged, there
is no reason to create a lfs user for safety.  Ignore this chapter.

4.4
Read chapter 4.4, but don't do anything yet.  The problem with using
a live CD to build your system is that every time you reboot the
system, any special settings you have made to your user environment
are lost.  You can't easily make a lfs user and a special build
environment by editing the user's .bash_profile and .bashrc files as
specified in this chapter.  So instead, we will write a script that
sets up a "clean" environment.  This script will have to be run each
time you leave the special build environment or re-boot the system.

The only place you have to store any settings that will remain the
next time you boot the live CD is the partition you created for your
LFS build.  Although there may well be better solutions to this, this
is the method I used when building LFS 6.1.  I am assuming that you
place all of the scripts mentioned in this hint in the root of your
LFS partition.

cd to $LFS and create the following script using your favorite text
editor (provided that your favorite text editor is included on the
live CD).  At the time of writing, you may use vim, nano, or joe.

#################################################################
# Begin setenv.sh
# Sets up clean environment for 6.1 Chapter 5
#
# Before this script is run, you must have executed the following
# after booting the live CD:
#
# export LFS=/mnt/lfs
# mkdir $LFS
# mount /dev/xxx $LFS	(where xxx is your lfs partition)

# Create the link to tools as per Ch 4.2
ln -s $LFS/tools /

# Set up the environment as per chapter 4.4
# This sets up a special environment for root based
# on the procedure for the user "lfs" which isn't
# necessary when running from the live CD.

echo "set +h" > ~/.bashrc
echo "LFS=/mnt/lfs" >> ~/.bashrc
echo "LC_ALL=POSIX" >> ~/.bashrc
echo "PATH=/tools/bin:/bin:/usr/bin" >> ~/.bashrc
echo "export LC_ALL PATH" >> ~/.bashrc

# This sets up key mapping so the delete key works:
cp /etc/inputrc ~/.inputrc

# Enter the special build environment
exec env -i HOME=$HOME TERM=$TERM PS1='\u:\w\$ ' /bin/bash

# End setenv.sh
###################################################################

And now make the script file executable and run it:
# chmod 700 setenv.sh
# ./setenv.sh

Note the difference between the live CD shell prompt and the special
"build" environment prompt we have created here.  You will be able to
easily see which environment you are in.

So, to recap, every time you reboot the system you will have to
execute the following commands to mount your LFS partition:

# export LFS=/mnt/lfs
# mkdir $LFS
# mount /dev/[xxx] $LFS

(Replace [xxx] with the partition that you have created for your LFS
installation.)

Then, run the setenv script to set up a "clean" environment:
# $LFS/setenv.sh

Now you can continue on with chapter 5...

When you are done with chapter 5, you may want to make a backup of
this partition using "Ghost" or some other backup tool.  (Just in
case something goes wrong in the next chapter...)  It could also be
used for a future build, or on another machine of the same
architecture.
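
If you do not have a tool like Ghost handy, a plain tar archive works
as well.  This is only one possible approach; the mount point and
archive name below are made up, and the archive must go on a
partition other than your LFS one (anything written to the live CD's
filesystem is lost at reboot):
# mkdir /mnt/backup
# mount /dev/[yyy] /mnt/backup
# cd $LFS
# tar cjpf /mnt/backup/lfs-chapter5.tar.bz2 .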

Turn off your machine and take a break.  At least type "exit" to
leave the temporary build environment.

Chapter 6
For the procedures in chapter 6 we will no longer be using the
setenv.sh script to set up the build environment. Instead we "chroot"
into the $LFS directory.  This will require running a script to enter
the chroot environment, and then another script to configure the
temporary filesystems and devices within that environment.

If you are still in the environment set up by setenv.sh, type exit to
return to the live CD root prompt.  Or to be safe, reboot the live
CD.


6.2
Create the $LFS/proc and $LFS/sys directories, but do not mount the
file systems.  We will mount the filesystems in the lfschroot.sh
script below.
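
The directory creation amounts to something like the following (see
the book for the exact command):
# mkdir -p $LFS/proc $LFS/sys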

6.3
It is assumed that during the build process you may want to take a
break and turn off your machine to save energy.  Rather than retyping
all the commands required to chroot when you come back, we will put
them all in the following script.

################################################################
# Begin lfschroot.sh
# Changes root directory for use in LFS 6.1 Chapter 6

# Mount required filesystems
mount -t proc proc $LFS/proc
mount -t sysfs sysfs $LFS/sys

# These "fake mounts" are also needed now
mount -f -t ramfs ramfs $LFS/dev
mount -f -t tmpfs tmpfs $LFS/dev/shm
mount -f -t devpts -o gid=4,mode=620 devpts $LFS/dev/pts

echo ""
echo "Do not forget to populate /dev !"
echo ""

# Chroot into the LFS system with a reduced environment
chroot "$LFS" /tools/bin/env -i \
	HOME=/root TERM="$TERM" PS1='\u:\w\$ ' \
	PATH=/bin:/usr/bin:/sbin:/usr/sbin:/tools/bin \
	/tools/bin/bash --login +h

# End lfschroot.sh
###################################################################

Make the script executable with this command.
# chmod 700 lfschroot.sh

Enter the lfs chroot environment:
# ./lfschroot.sh

Ignore the reminder about populating /dev.  I will cover that in
chapter 6.8.

Once in the chroot environment a lot of things will be "broken".  You
no longer have access to the text editors or many of the other tools
from the live CD.  In fact, all you have to work with is what you
have installed in chapter 5.

6.4
Since we skipped creating the lfs user, you can skip this too.  Note
that when in the chroot environment the files are already owned by
0:0, whereas outside, in the live CD environment, they are owned by
root:root.  This is because the files are never really owned by
"root", they are owned by user 0 group 0.  A "normal" OS translates
the user and group numbers to the user and group names specified in
/etc/passwd and /etc/group.  Since we have no passwd or group files,
"I have no name!"

6.5 - 6.7
[Optional] I hate typing a bunch of stuff on the command line.  I
like to make a script file so I can edit it and run it later if
necessary.  For reasons mentioned above it is a bit difficult to do
that now in the chroot environment.  You may want to consider exiting
now and creating scripts to perform the commands for chapters 6.5
through 6.7.  (Or think about learning ALFS!)
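
If you want to go that route, a skeleton might look like the one
below.  The script name is made up; paste in the exact commands from
the book yourself, put the file in $LFS like the other scripts, make
it executable, and run it as /ch6567.sh from inside the chroot
environment.

################################################################
# Begin ch6567.sh
# Skeleton only -- fill in the commands from LFS 6.1
# chapters 6.5 through 6.7 before running.
# ONLY RUN AFTER YOU HAVE CHROOTed INTO LFS DIRECTORY!!!

# Chapter 6.5: creating directories
# (the book's mkdir commands go here)

# Chapter 6.6: creating essential symlinks
# (the book's ln commands go here)

# Chapter 6.7: creating the passwd and group files
# (the book's commands go here)

# End ch6567.sh
################################################################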

6.8
Make the two nodes as described in 6.8.1.
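
For reference, the two nodes are created with mknod roughly as shown
below; check the book for the exact modes and device numbers.
# mknod -m 600 /dev/console c 5 1
# mknod -m 666 /dev/null c 1 3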


6.8.2
We are going to make a script for this, so that we can populate /dev
at will while building the rest of chapter 6.  Exit the chroot
environment and create the following script in your $LFS directory to
load the device nodes.

#########################################################
# Begin devpop.sh
# Based on LFS 6.1 Chapter 6.8.2
# This mounts a tmpfs on /dev and populates
# the /dev directories with a minimal set of device nodes.
# ONLY RUN AFTER YOU HAVE CHROOTed INTO LFS DIRECTORY!!!

mount -n -t tmpfs none /dev

mknod -m 622 /dev/console c 5 1
mknod -m 666 /dev/null c 1 3
mknod -m 666 /dev/zero c 1 5
mknod -m 666 /dev/ptmx c 5 2
mknod -m 666 /dev/tty c 5 0
mknod -m 444 /dev/random c 1 8
mknod -m 444 /dev/urandom c 1 9
chown root:tty /dev/{console,ptmx,tty}

# These symbolic links are required by LFS
ln -s /proc/self/fd /dev/fd
ln -s /proc/self/fd/0 /dev/stdin
ln -s /proc/self/fd/1 /dev/stdout
ln -s /proc/self/fd/2 /dev/stderr
ln -s /proc/kcore /dev/core

# Mount the kernel filesystems within /dev
mkdir /dev/pts
mkdir /dev/shm
mount -t devpts -o gid=4,mode=620 none /dev/pts
mount -t tmpfs none /dev/shm

# Test to see if udev is built yet.  If so, run it.
if [ -f /sbin/udevstart ]; then
	/sbin/udevstart
fi

# End devpop.sh
#################################################################


So, for the rest of chapter 6, anytime you reboot, you must run the
following commands:

# export LFS=/mnt/lfs
# mkdir $LFS
# mount /dev/[xxx] $LFS
# $LFS/lfschroot.sh
# ./devpop.sh

(Replace [xxx] with the partition that you have created for your LFS
installation.)

If you don't reboot, but only leave the build environment (with the
exit command), you may re-enter the chroot environment by running the
lfschroot and devpop scripts.  Note that you will receive some
warnings, since the scripts try to mount filesystems that are
already mounted.  These are safe to ignore.
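
If you want to see what is already mounted before re-running the
scripts, check from the live CD environment first, for example:
# mount | grep $LFS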

So, you can now continue by installing the packages in chapter 6...

6.22
Now that readline is installed, you can make the delete key work if
you want.  Just create the files specified in chapters 7.8 & 7.9.
(You will have to exit and chroot again to have the changes take
effect.)

6.61
If you followed the instructions, and you have not exited the build
environment since Chapter 6.37 (Bash), you will be running the newly
installed bash executable.  If so, exit the build environment and
reenter it as usual:
# exit
# $LFS/lfschroot.sh

Now you will be running the bash installed in /tools (the chapter 5
version) rather than the newly installed one.  Proceed with stripping
if desired.
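
The stripping itself is done with find and strip, along the lines of
the sketch below; copy the exact command from the book rather than
retyping this one.
# /tools/bin/find /{,usr/}{bin,lib,sbin} -type f \
    -exec /tools/bin/strip --strip-debug '{}' ';'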

6.62
If desired, remove the /tools directory.  If you do this, you must
edit lfschroot.sh to remove the references to /tools from the chroot
command.  And you can remove the +h bash option. Edit the chroot
command to look like this:

# Chroot into the LFS system with a reduced environment for Chapter 7
chroot "$LFS" /usr/bin/env -i \
	HOME=/root TERM="$TERM" PS1='\u:\w\$ ' \
	PATH=/bin:/usr/bin:/sbin:/usr/sbin \
	/bin/bash --login
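
Removing the directory itself can then be done from inside the chroot
environment (or from the live CD by deleting $LFS/tools instead):
# rm -rf /tools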


EPILOGUE:
So that's all there is to it.

[ This hint was tested with LFS 6.1, using LFS Live CD version
x86-6.1-2, the en-US-ISO8859-1 locale, and the US keymap on a Dell
sc420 (i686) platform.  An ext3 boot partition with grub-0.95,
created by Fedora Core 4, already existed; I had previously removed
the extended attributes from it.  LFS was installed on a 2GB ext2
partition. ]


ACKNOWLEDGEMENTS:
  * Obviously, the creators and maintainers of LFS and the LFS Live
CD.


CHANGELOG:
[2005-08-19]
  * Initial hint.