Manually Edited Files

Is there a way to let FreeBSD list all the files I have edited manually? So that I may go and check what I have added in which file to help with future installations. A bonus would be if I could get a list of files I have added outside my home directory, since I must have added them for the system to work properly.
 
freebsd-update(8) has the IDS option to check installed files against their original checksums. Alternatively, if you have a general idea of the location of the files you changed, and when you changed them, you can probably use the find(1) command.
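A minimal sketch of both approaches (the paths and the 30-day window are just examples):
Code:
# Compare installed base system files against the release checksums
freebsd-update IDS

# List configuration files modified in the last 30 days
find /etc /usr/local/etc -type f -mtime -30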
 
Some people track files they've modified with a source control system like svn.

A simple thing to do is when you edit a config file, add a comment with your initials to the part that has been edited. If there is a default setting, comment it out, but leave it for comparison:
/etc/sysctl.conf
Code:
# WB
vfs.usermount=1

When the time comes to look for what you have changed, grep for your initials.
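For example, assuming the initials are "WB" and the edits live under /etc and /usr/local/etc:
Code:
grep -rn 'WB' /etc /usr/local/etc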
 
wblock@ said:
Some people track files they've modified with a source control system like svn.

A simple thing to do is when you edit a config file, add a comment with your initials to the part that has been edited. If there is a default setting, comment it out, but leave it for comparison:
/etc/sysctl.conf
Code:
# WB
vfs.usermount=1

When the time comes to look for what you have changed, grep for your initials.
I often do that also (the initials, that is).
For the OP:
Code:
mkdir /editeds
nano -w /etc/make.conf
#then...
cp -iv /etc/make.conf /editeds
I started using that method shortly after I started using FreeBSD (I had been using it that way in Windows...)
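When you later want to see what has changed since the copy was made, a plain diff against the saved copy is enough (paths as in the example above):
Code:
diff -u /editeds/make.conf /etc/make.conf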
 
Majorix said:
Is there a way to let FreeBSD list all the files I have edited manually?

You can check the integrity of files with the base utility mtree(8).
Run it once to create a specification with the original hashes, then run it on demand to compare the current files against those hashes.
(Good only for finding which files got changed.)
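A minimal sketch of that approach (the location of the specification file is just an example):
Code:
# Create a specification with SHA-256 hashes of everything under /etc
mtree -c -K sha256digest -p /etc > /root/etc.mtree

# Later, report files that no longer match the specification
mtree -f /root/etc.mtree -p /etc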

The best option IMHO is to use a DVCS.
You can not only track the changes you made yourself, but also any changes made by software (or hackers).
Besides that, you don't need to propagate similar changes across machines manually, since version control systems provide network access for synchronization.

Install /usr/ports/devel/git, for example. Then cd to the directory that you want to track and issue the following commands:
Code:
git init
git add .
git commit -m "Initial version."

All files and directories will be recorded recursively in the repository (inside the .git directory).

When you have made changes to a file (or files), issue the following commands:

Code:
git add .
git commit

Changed/new files will be added to the repository.

If you'd like to find which files were changed, issue the command
Code:
git status
before "add/commit" and it will show you all changed files

If you'd like to see exactly what was changed compared with the previous version, issue:
Code:
git diff FileName
where FileName is the file you want to investigate; you will see the differences between the old and new versions.

The advantage of using a DVCS is that you can check/track the history of ALL changes(!!!)
Besides that, you can set up a primary repository that you would like to distribute across multiple machines.
All you need to do then on remote computers/VPS/KVM is:
Code:
pkg_add -r git
cd to_dir_that_you_are_going_to_synchronize
git clone https://Path.To.Your.Repository

At least /etc and /usr/local/etc should be tracked, but you can use it on any important directory to keep track of all files.
So, in case some files are changed without your knowledge,
Code:
git status
will show it immediately and give you the choice to restore any previous version of the changed file(s).
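For example, to inspect a file's history and restore an earlier version (the filename and commit hash are placeholders):
Code:
# Show the history of a particular file
git log --oneline rc.conf

# Discard local changes and restore the last committed version
git checkout -- rc.conf

# Or restore the version from a specific older commit
git checkout <commit-hash> -- rc.conf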
It is also a good idea to run
Code:
git status
after each port addition/upgrade to find new changes and, if everything is OK, record them as a new point in the files' history by issuing only two commands:

Code:
git add .
git commit
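If you also want a named marker you can come back to later, a tag works too (the tag name is just an example):
Code:
git tag after-portupgrade-2013-01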
 
jb_fvwm2 said:
I started using that method shortly after I started using FreeBSD (I had been using it that way in Windows...)

Code:
mkdir RCS
once, and after each change to a file issue:
Code:
ci -l file
if you only need to track changes in text files and don't care about integrity.

/usr/bin/ci is in the base system, so no installation is needed.

Use

Code:
rcsdiff SomeFile
to compare the latest revision on the default branch of the RCS file with the contents of the working file SomeFile.

Use
Code:
co -lx.y SomeFile   # where x.y is the version of the file
to restore a particular previous version.


IMHO it (RCS) is better than just keeping a copy of the file, but to get the full advantage of file versioning (including integrity checking) I prefer to use git.
 
wblock@ said:
When the time comes to look for what you have changed, grep for your initials.

I first saw this circa 1985 in the Multiple Device Queueing System [MDQS], where Doug Gwyn was doing maintenance. He tagged his changes "DAG". To me a dag was something you found in the vicinity of the rear end of a sheep, and I thought it was a marker for "daggy" code. Well, it was, just not for the reasons I first thought...
 
If you have your own git repository (or GitHub, but you should use a private repo then), you can add that as the remote for the just-created local repository and track changes in it by simply doing:

# git remote add --track master origin git@mygitserver:/path/myetc.git
# git push origin master
 
At work we have something similar; we use CVS for this (it's been running for years), but the principles are the same. We simply run a script that checks out the production branch and copies the files to their proper location. The script runs periodically but can also be started manually. If you make a mistake somewhere it's easy to roll back: just tag the previous version again and the scripts will do the rest.
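A rough sketch of the idea, not our actual script (the CVSROOT, module name, and tag are placeholders):
Code:
#!/bin/sh
# Check out the files tagged "production" and copy them into place.
CVSROOT=/home/cvs
WORKDIR=/tmp/etc-checkout

rm -rf "$WORKDIR"
cvs -q -d "$CVSROOT" checkout -r production -d "$WORKDIR" etc-configs
# A real script would also skip the CVS/ bookkeeping directories.
cp -Rp "$WORKDIR"/* /etc/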
 
SirDice said:
At work we have something similar, we use CVS (it's been running for years) for this but the principles are the same.

IMHO it is far from the same principles (Git vs. CVS).

CVS doesn't provide integrity checking(!!!), so in case of damage/corruption/hacking CVS can't help with that.
Git, on the other hand, keeps every object addressed by a SHA hash of its content (see the example below).
A DVCS allows multiple workflows, while CVS is stuck with the client-server model.
CVS uses a centralized repository. (If the central repository goes away, all information is lost, or at least there will be downtime.)
Git can have an unlimited number of central repositories, so it is much easier to implement geo-redistribution (kind of like CloudFlare).
Since there can be multiple root repositories, it's DDoS resistant.
Since every peer contains a whole copy of the central repository, it's an automatic backup to multiple machines; there is no single point of failure because any copy is the repository.
At the same time, Git can be used alone (as a single repository), just to track all changes to some directory or file(s) locally.
Git isn't network dependent, so it is very easy to set up headless machines with the help of a flash drive, by choosing the specific branch of the repo that holds a particular configuration.
Besides that, Git is much faster, because no network operations are involved when you work with revisions.
Check-in/check-out happens practically immediately even for directories full of huge binary files, because local clones use hard links instead of physically copying the object store.
Git has a very efficient model of rights delegation.
A local admin doesn't need to ask permission to commit changes to the repository, but can create their own branch locally and work with it.
On the other hand, Git still allows centralized control of the primary repository, so only approved changes get committed.
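For example, a quick way to verify the integrity of the whole object store at any time:
Code:
git fsck --full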

Just my 2 cents if you decide to switch to a DVCS.
 