Pre-existing command-line tools? Those pretty much require the input files to be sorted. diff works, but its output can be a little tricky to read. If you use unified format (the -u switch), you can look at the + and - in the first column to see the line-by-line differences.
If the files are sorted, and you want to see only the differences, use join with the -v or -a switches.
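A quick sketch of both tools, using two hypothetical sample files a.txt and b.txt (already sorted; note that join matches on the first field by default, so single-word lines keep the example simple):

```shell
# Two small sorted sample files (hypothetical names).
printf 'apple\nbanana\ncherry\n' > a.txt
printf 'banana\ncherry\ndate\n' > b.txt

# Unified diff: '-' marks lines only in a.txt, '+' lines only in b.txt.
# (diff exits nonzero when the files differ, hence the || true.)
diff -u a.txt b.txt || true

# join -v 1 prints lines of a.txt with no match in b.txt, and vice versa.
join -v 1 a.txt b.txt   # prints: apple
join -v 2 a.txt b.txt   # prints: date
```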
Why do the files need to be sorted? Think about how you would do this yourself if they are not: you'd read one line from file A, then scan all of file B for the corresponding line. This scales really badly, because for every line in one input file, you need to read the other input file completely, end to end. If the input files have n lines each, you'll read O(n^2) lines in total.

If the files are sorted, you only need to read every line once: read the first line from each file. If they are equal, the line is in both files; read the next line from each. If they differ, output the smaller line as "missing" and read the next line from that file only.

But now you'll complain that sorting costs time too. Yes, it does, but sorting each file takes O(n log n), which for sizeable n is much smaller than O(n^2).
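For whole-line comparison of two sorted files, this one-pass merge is essentially what the standard comm utility implements; a minimal sketch, again with hypothetical files a.txt and b.txt:

```shell
# Two small sorted sample files (hypothetical names).
printf 'apple\nbanana\ncherry\n' > a.txt
printf 'banana\ncherry\ndate\n' > b.txt

# comm requires sorted input and does the single-pass merge described above:
# column 1 = lines only in a.txt, column 2 = only in b.txt, column 3 = both.
comm -23 a.txt b.txt   # suppress columns 2 and 3: prints apple
comm -13 a.txt b.txt   # suppress columns 1 and 3: prints date
```

Like join, comm reads each file exactly once, so it runs in linear time on already-sorted input.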
If you don't know the O() notation: it essentially means "proportional to", ignoring constant factors. It's how computer scientists express the cost of algorithms, in deliberately crude terms. To see why the difference matters: for n = 1,000,000, n^2 is a trillion, while n log2 n is only about twenty million. If you want more details about how to do this, there is a wonderful book by Donald Knuth called "The Art of Computer Programming"; this problem is covered in volume 3, "Sorting and Searching".