Backdoors in my OS?

Hello, I want to keep this thread free of politics and political opinions, but I am a paranoid person and I do not like the idea of backdoors in my OS, regardless of who is doing it. How can we be sure that there are no backdoors in FreeBSD? I am also aware that the hardware itself could have a means to spy on its user. I'm trusting that since FreeBSD is source-based and all the source code (for the most part) is publicly available, it would be very difficult to put a backdoor into the system. How can we be sure that the government (any government, for that matter) is not logging everything I am doing on my computer?
 
How can we be sure that there are no backdoors in FreeBSD?
You can't. My first response would be to suggest going over the source code yourself, but considering the amount of code involved I deem this to be virtually impossible.

How can we be sure that the government (any government, for that matter) is not logging everything I am doing on my computer?
Simple answer: by making sure said computer isn't connected to the Internet. In FreeBSD terms: by running this command: # service netif down. Or, if you'd like to remain connected to your local network: # route delete default.
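If you go that route, it's worth verifying the result afterwards; a quick sanity check might look like this (a minimal sketch, assuming a typical setup):

# service netif down
# ifconfig          (interfaces should no longer be marked UP)
# netstat -rn       (the routing table should no longer list a default route)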

But in the end there are no certainties.
 
I'm trusting that since FreeBSD is source-based and all the source code (for the most part) is publicly available, it would be very difficult to put a backdoor into the system.

Even with 100% source visibility, and no illicit code present in the source, the compiled operating system can still have a backdoor. You just have to embed the backdoor deep into the compiler or linker, as Ken Thompson famously explained back in 1984. See http://c2.com/cgi/wiki?TheKenThompsonHack for details. The source code can be 100% clean, but the binaries can't be guaranteed to be clean unless the entire build system (hardware, firmware, build software, etc) is also 100% clean.

Basically, the bottom line is that you can't really be completely certain that there isn't something illicit present. It could be hidden in the CPU microcode, system BIOS / boot ROM, any hardware with its own firmware / microcode (GPU, NIC, HBA, etc), or even embedded in your hard drive's controller. You basically have to either take a bunch of things on trust or go back to an age where you can verify the physical hardware personally and start by entering the boot code using toggle switches on a panel.

With the major *BSD and Linux projects, it would be very difficult for any form of compromise to exist for long, as they are under more or less constant scrutiny by some of the most talented people on the planet. The people involved are spread around the world sufficiently such that it is essentially impossible for a government entity to persuade them to hide something and keep quiet about it. Nothing is impossible, but it would be far easier for the NSA to secretly persuade a commercial operating system vendor to insert a little bonus feature, and far easier for it to remain hidden in a commercial OS.

For the highest levels of security / secrecy, even physically disconnecting from the network is insufficient. See the NSA / NATO "TEMPEST" specification / certification, for example. The NSA (and others) know how to monitor your systems without requiring any form of hidden code, network, or physical access, if you are of sufficient interest to them.
 
With the major *BSD and Linux projects, it would be very difficult for any form of compromise to exist for long, as they are under more or less constant scrutiny by some of the most talented people on the planet.
I have to disagree with you on that one. Remember the Debian OpenSSL disaster? The package maintainer himself deemed it necessary to apply changes to OpenSSL's random number generator, yet by doing so created a major loophole. It took roughly two years before this massive, yet manually created, bug was discovered.

If it can take two years to detect a flaw in the random number generator itself, one can only imagine how long it could take for less important parts.
 
I have to disagree with you on that one. Remember the Debian OpenSSL disaster? The package maintainer himself deemed it necessary to apply changes to OpenSSL's random number generator, yet by doing so created a major loophole. It took roughly two years before this massive, yet manually created, bug was discovered.

If it can take two years to detect a flaw in the random number generator itself, one can only imagine how long it could take for less important parts.

Ok, perhaps "should be very difficult" would be a better description. A bad change should still be more likely to get caught than with closed source / proprietary code, as it is there for all to see, without anyone having to analyse the disassembly of highly complex code.

That example is an excellent demonstration of why all security-critical code needs a thorough automated test suite, particularly anything related to crypto, and why independent code review by many skilled eyes is critical. With closed source, introduction of the same style of bug is probably equally likely, especially when long-term support is carried out by less experienced teams (or a team without the necessary deep specialist skills), or by the lowest outsourcing bid.
 
I want to keep this thread free of politics and political opinions, but I am a paranoid person
First, I want to make absolutely clear that my response is not meant to insult the OP. My intention is to make the reader think about choosing appropriate words when talking about IT security.

Is it appropriate to call yourself "a paranoid person"? What happens if you say such sentences outside of the IT context?

https://en.wikipedia.org/wiki/Paranoid_personality_disorder said:
Paranoid personality disorder (PPD) is a mental disorder characterized by paranoia and a pervasive, long-standing suspiciousness and generalized mistrust of others. Individuals with this personality disorder may be hypersensitive, easily insulted, and habitually relate to the world by vigilant scanning of the environment for clues or suggestions that may validate their fears or biases. Paranoid individuals are eager observers. They think they are in danger and look for signs and threats of that danger, potentially not appreciating other evidence.

They tend to be guarded and suspicious and have quite constricted emotional lives. Their reduced capacity for meaningful emotional involvement and the general pattern of isolated withdrawal often lend a quality of schizoid isolation to their life experience. People with PPD may have a tendency to bear grudges, suspiciousness, tendency to interpret others' actions as hostile, persistent tendency to self-reference, or a tenacious sense of personal right. Patients with this disorder can also have significant comorbidity with other personality disorders.

So why are some IT people still so eager to call themselves "paranoid"? A paranoid person is not seen as trustworthy by most others, and conspiracy theories are right around the next corner.

The year 2013 marks the beginning of the "post-Snowden era". Since 2013, the term "paranoid" has been under review and should be reserved for those needing professional therapy.

Should an IT professional call himself "paranoid"? Does that person attract positive attention by doing so? Is that really cool? Does it convey competence? Probably not, except in some unserious environments.

So let's tag the word "paranoid" in the IT context as obsolete. It's a remnant of the pre-Snowden era, created by those who had an interest in labelling some IT people as suspect and mentally ill. Aren't those times gone?

We should take the time to think about using appropriate terms that are less ambivalent and do not hurt our own reputation.

When implementing security in IT environments that are less vulnerable to attacks by strong adversaries, there is no need to "be paranoid". Instead, careful risk analyses are needed, tailored to the strength of the adversaries one has to cope with.

Having said all this, think about research on side-channel attacks, or about defending against such attacks. Do we really still need the buzzword "paranoid", or can we do better?
 
but I am a paranoid person and I do not like the idea of backdoors in my OS, regardless of who is doing it.

I feel your pain.

But alas, I have to agree with ShelLuser in that you simply can't avoid the possibility of backdoors. Even if you could go over all the OS code, that doesn't guarantee you are free from "backdoors". Your machine probably has some firmware (BIOS, etc.) that you cannot see the code for. You could of course compile some of the free/open-source loaders (like coreboot), but that would mean trusting your compiler and firmware, as explained by Murph. This leads to a case of turtles all the way down, since you can't really compile clean firmware (or an OS, for that matter) without relying on some other possibly suspect firmware.

Simple answer: by making sure said computer isn't connected to the Internet. In FreeBSD terms: by running this command: # service netif down. Or, if you'd like to remain connected to your local network: # route delete default.

If you can't trust your software, why not at least stop it from talking to anyone? A mighty good idea. But I'm skeptical that you can really do that, since you can't trust your hardware (and, by the transitive property, your software) :rolleyes:

Just because you turned your network interface off from within the OS, are you sure it's really off? The network interface controller (NIC) is, after all, a piece of hardware that's often built right into the motherboard. As Murph mentioned, it has its own firmware. Maybe when you turn it off, it just stops talking to the OS and keeps broadcasting. I have checked the 2.4 and 5 GHz bands and determined the NICs on my boxes do in fact stop broadcasting on standard WiFi bands when "off", but I lack the time and equipment to verify that they're not gossiping about me on other bands. And yes, they could possibly broadcast on other bands. Although not an advertised capability, have you actually taken an electron microscope to the NIC's integrated circuits and verified that it can only operate on those two bands? It wouldn't be that hard for No Such Agency to pay a chip manufacturer to add an "extra feature" to their chips, and it's very hard to verify the integrity of hardware in the age of ICs.

Of course you could do as I did with my "crypto box". I actually physically took the NIC out (older boxes often have the NIC attached to the bus rather than the mobo, and thus it can be removed easily) along with anything else that could possibly broadcast a signal. Or wait. Did I? I did mention I can't actually verify the integrity of the rest of the hardware. Maybe my 1 GB RAM chip has a small antenna and microcontroller in it that can broadcast the contents of my RAM at will. Or maybe my hard disk. Or maybe even my CPU. Even if I did have thousands of years and an electron microscope to verify that all the ICs are doing as they're supposed to, and found that all the hardware was in fact trustworthy, hardware is, after all, circuitry. Any wire with varying current going through it will produce an electromagnetic signal... Your CPU produces a weakly detectable signal at a frequency equal to the clock rate. I suspect that it would be possible to tell precisely which opcode was executed by the amplitude of the signal each cycle, since each opcode has a different set of transistors being flipped to the "on" state. And there's no shortage of devices Big Brother could use to listen to this signal. Your naive girlfriend's iPhone... those danged Google cars... your WPA-"secured" WiFi router... The possibilities are endless...

How's your paranoia now? :D

What would probably work is Faraday-caging whatever area you use the computer in. This is somewhat common practice in high-security government buildings, to stop their air-gapped networks from being eavesdropped on. Of course most people, including me, are unable/unwilling to turn a room of their residence into a copper-clad dungeon... You would also need to do something about the "oversized rats" that might rudely infest your Faraday-cage room while you step out, without so much as the courtesy of leaving you a search warrant.

You could *just* get/build yourself a PDP-8. Big Brother would have a very difficult time backdooring a system made from discrete transistors without you noticing it. Alas, the 12-bit 1960s DEC architecture is not supported by FreeBSD...

Or you could just accept that we live in a surveillance state and dream of retiring early to Fiji, Vanuatu, or anywhere else that has nice, off-the-grid tropical islands.
 
Don't forget that many motherboards are made in China, even the iPhone's. Network routers such as Tenda, Netcore, TP-Link, Huawei and a couple of others have been accused of, or caught with, backdoors in their firmware.

It's pointless to argue about FreeBSD having a backdoor while routers, motherboards, BIOSes, WiFi cards and Ethernet cards could have one. If you really want to be absolutely 100% free of backdoors, then you'll have to build everything yourself, including the OS, hardware, firmware, etc., and stay off the Internet.

The FreeBSD community does try its best to find backdoors, but there is no such thing as a 100% guarantee.
 
When I am using a Windows machine, I tend to do this:

https://www.ibm.com/developerworks/...om_having_full_access_to_the_internet?lang=en

Not because I care about spying as such, but mostly because I don't like having my computer constantly doing random crap like updates. I prefer to keep it as deterministic as I can.

The same could be done for FreeBSD if you do not trust it. You could also use some simpler, open-source or more transparent hardware to run the proxy (perhaps a BeagleBone, Arduino or RPi?) rather than running it in a VM.
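To sketch the transparent-redirect part of that idea (purely illustrative: the interface name and proxy port are assumptions, the proxy box is presumed to run FreeBSD with pf, and the pass/NAT rules are omitted for brevity):

# /etc/pf.conf fragment on the proxy box:
# force all plain HTTP arriving from the LAN into a local proxy on port 3128
rdr on ue1 inet proto tcp to any port 80 -> 127.0.0.1 port 3128

Anything that refuses to speak through the proxy then simply fails to connect, which is exactly the behaviour you want from pesky auto-updaters.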
 
The main difference between Windows and FreeBSD is closed versus open source code: Windows does a lot of random crap and FreeBSD doesn't. In fact, you can easily turn off jobs or startup services in FreeBSD, but you cannot with Windows. I would say FreeBSD is way more secure than you think, because FreeBSD is open source and anyone can investigate it. If someone were to slip in backdoor code, it would eventually be found or blocked at the review or audit stage. There are programs that can scan the source code for suspicious patterns, monitor network ports for any unusual activity, and audit the differences between older and newer code. A backdoor will be found no matter who tries to insert it.

A few years ago, someone claimed that the FBI had hired an OpenBSD developer to insert a backdoor into OpenBSD, but it was supposedly unsuccessful because of the review and audit stages. Another attempt was made on ProFTPD, with a fake checksum, which is hard to pull off. It's difficult to bypass security checksums, signatures and audits. Every source commit is subject to several stages of review and audit before being accepted. It's not like you can write a program, submit it, and have it accepted immediately without going through those stages. If I wanted, I could run a diff on all sources between 10.2 and 10.3 to see what changed, without having to investigate the entire source tree, and anything odd could be found.
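To give an idea of how simple that comparison is, a minimal sketch (the svn URLs and scratch paths are illustrative):

# svnlite checkout https://svn.freebsd.org/base/releng/10.2 /tmp/src-10.2
# svnlite checkout https://svn.freebsd.org/base/releng/10.3 /tmp/src-10.3
# diff -ru /tmp/src-10.2 /tmp/src-10.3 | less

Reading the output still takes effort, of course, but every change is right there in plain text for anyone to inspect.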

If someone were to insert a backdoor into FreeBSD, they'd have to be really clever to do it without getting caught, and that's hard to do with the many paid and volunteer FreeBSD developers and 10 million programmers worldwide. Someone will notice.

With Windows and Mac OS X, you're at their mercy, because there is no public oversight and their source code is closed to the public. They can insert whatever crap they like into their OS without you knowing.

Red Star OS (a Linux distribution) has massive backdoor and surveillance crap in it, and it's the only government-approved OS in North Korea. I would not recommend installing it on your computer, but if you want to play with it, disconnect from the Internet first. I mean literally disconnect the Ethernet cable, turn off your router, call your ISP to cancel the service, or, as a last extreme resort, take an axe to the Ethernet cable. Seriously, Red Star OS is a truly evil OS, created under the late leader Kim Jong-il.
 
I have no intention of starting a whole discussion here, one which is even somewhat off-topic, but having said that:

In fact, you can easily turn off jobs or startup services in FreeBSD, but you cannot with Windows.
That is actually incorrect. Check out msconfig.exe for starters (you can simply start it; it'll probably ask for elevated permissions). This allows you to fully control Windows' boot process. Another option is services.msc which, when run with administrative permissions, allows you to start and stop all the registered services.
 
That is actually incorrect. Check out msconfig.exe for starters (you can simply start it; it'll probably ask for elevated permissions). This allows you to fully control Windows' boot process. Another option is services.msc which, when run with administrative permissions, allows you to start and stop all the registered services.

You can, but how many people know this? Not many.

Windows has too many services with obfuscated names. Turning off the wrong service will break the system, and the same could be said for FreeBSD too.

Anyway, most FreeBSD users know UNIX very well. Most Windows users are novices and don't know what service A or B does. That's why Windows does updates automatically, while FreeBSD users have to perform updates manually. Windows is designed for people who don't want to work under the hood, use the CLI or perform complicated tasks.

Anyway, you get the idea.
 
When I am using a Windows machine, I tend to do this:

https://www.ibm.com/developerworks/...om_having_full_access_to_the_internet?lang=en

Not because I care about spying as such, but mostly because I don't like having my computer constantly doing random crap like updates. I prefer to keep it as deterministic as I can.

The same could be done for FreeBSD if you do not trust it. You could also use some simpler, open-source or more transparent hardware to run the proxy (perhaps a BeagleBone, Arduino or RPi?) rather than running it in a VM.

I agree with all of the above. I hate auto-updates. Linux distros seem to be pushing them now, just like MS, and making them the default SOP.

I like the proxy idea, but not by using conventional tools. A stock proxy software package would be towards the top of the list in terms of exploit affinity. A completely homegrown proxy system could befuddle attackers. You could run a Raspberry Pi or ODROID with 2.4 GHz transceivers (the kind you use for wireless serial comm links) and then run private-key encryption on them. The homegrown links could use homegrown software, so there would be no known exploit profiles to work from.

But, as was said, in the end there is no infallibility. Some processors come armed with internal transmitters these days, and the Pi 2 has a programmable frequency generator that can output on any frequency up to 250 MHz, which might be good enough for eavesdropping if the right software (I should say the wrong software!) were inserted into the system by exploiters.

In the end, pull all the connectivity, wired and wireless, and use sneakernet for transfers. Then, worry about processor transmitters in your sleep.
 
How can we be sure that the government (any government, for that matter) is not logging everything I am doing on my computer?

I would say: activate and configure any of the available FreeBSD firewalls to your liking (note that some packets go through it like butter; I've personally seen it), and add another firewall device (of a different brand/vendor, too) with a logging feature to isolate your Internet router. The firewall should be configured to DENY ALL except the specific traffic that you need, ultimately only to the servers that you contact. It's better than nothing.
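With pf, a minimal deny-all sketch might look like this (the interface name and the allowed ports are illustrative only; tighten the pass rule to the specific servers you actually contact):

# /etc/pf.conf
ext_if = "em0"
block log all
pass out on $ext_if proto { tcp, udp } to any port { 53, 80, 443 } keep state

# enable and start it:
# sysrc pf_enable=YES
# service pf start

The log keyword matters here: once pflog is enabled, it gives you a trail on the pflog0 interface that you can compare against the logs of the second, separate firewall device.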

Also, are you using a web browser? If so, that's not a good idea... The crap stored in cookies and the JavaScript code being executed right there... Look up secure web browsing topics too. ;-)

For the highest levels of security / secrecy, even physically disconnecting from the network is insufficient. See the NSA / NATO "TEMPEST" specification / certification, for example. The NSA (and others) know how to monitor your systems without requiring any form of hidden code, network, or physical access, if you are of sufficient interest to them.

Oh! That one is nasty! This is way beyond cracking the WiFi password...

Dominique.
 
My servers are behind a pfSense firewall, so pretty much everything is blocked except the ports required for the web servers.
 
I generally advocate FreeBSD and feel it is pretty "safe". However, nobody uses it for security in the paranoid sense mentioned here. For that, it is probably better to choose an OS which is configured for that purpose specifically. I'd use Tails as a first choice if it was really important. It runs fine in VirtualBox on FreeBSD too. As a good compromise for a persistent install, I run a separate machine with a plain, patched Debian which I use for the Tor Browser. I'm not paranoid enough to consider this a must, though. I just do it to learn, and to provide a small amount of resistance to the way things seem to be going.
 
I'd use Tails as a first choice if it was really important.
I think that due to the sheer complexity of Linux and its "wild west" style of development, the Tails distro can only do so much. They surely don't audit the entire software stack it is based upon. Therefore I think FreeBSD could potentially be a better choice because it is simpler, or better yet OpenBSD, because that is audited for security relatively often.

Though a live CD is a good idea, because it can be reset back to a known state very quickly :)
 
From everything I read it would seem that the use of Tails makes you a target.
http://www.infoworld.com/article/28...g-linux-users-for-increased-surveillance.html

"the NSA's intention here was to separate the sheep from the goats -- to split the entire population of the Internet into "people who have the technical know-how to be private" and "people who don't" and then capture all the communications from the first group."

You can bet every word said here is scraped.
 
You can bet every word said here is scraped.
Well, with the nature of the discussion in this thread and the mention of their TEMPEST stuff, it's fairly likely to trip one of their filters. Of course, if you really want to make sure, editors/emacs has a function for that (M-x spook):
Uzi DHS NOCS CIS Biological event VIP Protection USDOJ Airport SGC
National security Axis of Evil PFS Tuberculosis primacord NIMA
:p *wave to the good people at Ft. Meade*
 
Backdoors or not... I think there's one thing which is very important yet gets overlooked way too easily: good security starts by gaining a good understanding of the OS you're using. I don't care if that's Windows, Linux or our beloved FreeBSD. If you don't have a good understanding of what's going on and how things work, then you're creating a liability.

Please note: there's nothing between the lines here, I'm not insinuating that people don't know their environments.

But I do believe that if you have concerns about your server's security, then you shouldn't start by focusing on backdoors which might be theoretically possible, but by getting to know your operating system instead. Example: the ports collection downloads the official source code, then applies a patch, then processes it. So it helps to be familiar with # make patch: this allows you to check out the source tree of the port after the patches have been applied.
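A minimal sketch of that workflow (the port chosen is just an example):

# cd /usr/ports/ftp/wget
# make patch
# ls work/

After make patch, the work/ directory contains the extracted source with all of the port's patches applied, ready to be read before anything gets compiled.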

Speaking of which... This is actually one of the reasons why all of my FreeBSD environments (on all servers, and even my laptop) are compiled from the source tree (http://svn0.eu.freebsd.org/base/releng/10.3 is what I use). First and foremost because this gives me full control over my system (no wireless tools on my servers, but also no ZFS tools on my UFS-based laptop), but also because I can look everything up in the source tree itself.
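For reference, a condensed sketch of that procedure; see build(7) and the Handbook for the complete, safe sequence (mergemaster, installing from single-user mode, and so on):

# svnlite checkout http://svn0.eu.freebsd.org/base/releng/10.3 /usr/src
# cd /usr/src
# make buildworld buildkernel
# make installkernel
# <reboot>
# make installworld

Trimming the system (no wireless tools, no ZFS, and so on) is then a matter of setting WITHOUT_* knobs in src.conf(5).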

If I wonder what makes /usr/bin/yes tick, then I can look it up: /usr/src/usr.bin/yes/yes.c. And that can help with security as well. Obviously yes is pretty harmless, but now try focusing on sockstat or netstat.
 
good security starts by gaining a good understanding of the OS you're using.

And knowing what kind of security actually matters to you. A threat model is absolutely essential, or you will either miss important things or waste your time on stuff that doesn't matter in your situation. For example, many server operators really don't care about the NSA, but only about keeping their servers from being compromised and/or going down. Replacing a theoretically compromised BIOS to avoid theoretical vulnerabilities that only three-letter agencies can afford to exploit does not make business sense in most cases. Doing destructive RAM testing to find a type that is resistant to the rowhammer attack, even less so. Most people have much simpler needs, and it is worth defining them.
 
I have had a think about this, and I wonder if the following could potentially be a product in the future.

Basically, you plug some small device via USB into an untrusted OS like Windows and install the (open-source) drivers.

What this device and its drivers provide is a complete network stack (separate from the one provided by the OS). This stack is pretty useless on its own (none of the software on Windows can use it, for example). Then we also provide some userland software, such as a web browser specifically designed to use our third-party networking stack. Another userland example could be a proxy, so that trusted software running on Windows could also be configured to transparently use the third-party network stack.

Either way, those pesky Windows updates should fail to connect, and likewise other software like spyware, DRM, etc...

I wonder if we could forgo the USB device completely and hook into the actual networking hardware, redirecting it from the Windows network stack to our custom implementation, similar to how a VM does it.
 
Basically, you plug some small device via USB into an untrusted OS like Windows and install the (open-source) drivers.
Anything and everything you install on an untrusted host will be untrusted too. What if the kernel itself has been hooked? Your custom network code will simply receive subverted data, and it has no way to tell the difference.

I wonder if we could forgo the USB device completely and hook into the actual networking hardware, redirecting it from the Windows network stack to our custom implementation, similar to how a VM does it.
Some advanced malware works exactly the same way too :p

It reminds me of the Lamer Exterminator virus on the Amiga. It hooked directly into the code that reads from disk. Any action that read the boot sector simply received a bogus default boot sector. In reality the boot sector contained the Lamer Exterminator virus, but there was no way to detect this while it was active.
 
The Amiga was quite a different case, because it had no memory management that would have kept user-space programs and kernel memory separate. In hindsight, the design was just asking for trouble, because it allowed any program to modify the OS internals as it wished.
 