Solved C++ / Qt cross platform development

Hello,

I am planning to write a C++ application that uses Qt5. As far as I understood, using qmake and then make should make it fairly portable across *BSD and Linux systems (falling back to gmake if necessary).

Now, I want to do some tasks that differ across the *BSD and Linux systems, and I thought about adding one class per operating system (i.e. I inherit from a base class that encapsulates the platform-independent code, and when something is specific to a particular system, I create a separate class for that system).

My question is: how should one do that? What is the simplest way to achieve platform independence with C++/Qt5?

Furthermore, how would I control which linker flags (in particular which additional libraries) should be used?

The only thing that comes to my mind is #ifdef, but perhaps there are smarter ways than that?
Maybe there is some Qt wrapper that handles all of that for me (not sure though). Any hints, case studies or tips?

Thanks!
 
You might want to look into the "pimpl" C++ idiom. It gives you a looser coupling between your interface and its implementation. For example, you can tell the build system to use WinsockAdapter.cpp instead of PosixSocketAdapter.cpp while the corresponding header, and how it connects with the rest of the program, stays identical. Basically, it avoids #ifdefs in the header (e.g. between SOCKET on Windows and an int fd everywhere else), because it allows you to have a "public" header with next to zero dependencies and a "private" header containing the platform-specific knowledge.
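To sketch that idea in the same C style used later in this thread (pimpl proper is a C++ idiom with a private implementation class behind a pointer; the plain-C equivalent is an opaque handle, and all the names below are made up for illustration):

C:
/* socket_adapter.h -- public header: no platform headers, no #ifdefs */
struct SocketAdapter;                                   /* opaque handle */
struct SocketAdapter *socket_adapter_open(const char *host, int port);
void socket_adapter_close(struct SocketAdapter *sa);

/* posix_socket_adapter.c -- the build system picks this file on POSIX systems */
#include "socket_adapter.h"
#include <stdlib.h>
#include <unistd.h>

struct SocketAdapter {
    int fd;                     /* a Winsock implementation would keep a SOCKET here */
};

struct SocketAdapter *socket_adapter_open(const char *host, int port) {
    struct SocketAdapter *sa = malloc(sizeof *sa);
    if (sa == NULL)
        return NULL;
    (void)host; (void)port;     /* real code would call socket(2)/connect(2) here */
    sa->fd = -1;
    return sa;
}

void socket_adapter_close(struct SocketAdapter *sa) {
    if (sa != NULL) {
        if (sa->fd >= 0)
            close(sa->fd);
        free(sa);
    }
}

Callers only ever see the opaque pointer, so the header (and everything that includes it) stays identical across platforms; only the build system decides which implementation file goes into the binary.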

One word of warning: try to compile and run a Qt 3 application on a modern system. You will likely run into many, many issues, and you might then want to re-evaluate whether Qt is really that portable. Personally I recommend wxWidgets, not because it is nicer to use, but because maintainability (a form of portability) is actually feasible.
 
Can you be more specific on where, and why, you plan to use different code for Linux and FreeBSD? Give a handful of examples, perhaps?
 
Thanks for your replies!

Can you be more specific on where, and why, you plan to use different code for Linux and FreeBSD? Give a handful of examples, perhaps?

Sure: say I am writing an application that examines the x86-64 page tables. On Linux, I can do this (to some extent) by reading /proc/[pid]/pagemap. The same file does not exist on FreeBSD. One way to do it on FreeBSD would be to read /dev/kmem via the kvm(3) interface. I haven't checked NetBSD yet, but I think it has a similar facility to FreeBSD.
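For reference, the Linux side of that can be sketched like this (pagemap_entry is a made-up helper; each entry in /proc/[pid]/pagemap is 64 bits per virtual page, and recent kernels hide the PFN bits unless the process has CAP_SYS_ADMIN):

C:
/* pagemap_linux.c -- sketch: look up the pagemap entry for one virtual address */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

static int pagemap_entry(uintptr_t vaddr, uint64_t *entry) {
    int fd = open("/proc/self/pagemap", O_RDONLY);
    if (fd < 0)
        return -1;
    off_t off = (off_t)(vaddr / (uintptr_t)sysconf(_SC_PAGESIZE)) * sizeof *entry;
    ssize_t n = pread(fd, entry, sizeof *entry, off);
    close(fd);
    return n == (ssize_t)sizeof *entry ? 0 : -1;
}

int main(void) {
    static int probe;
    uint64_t e;
    probe = 42;                                 /* touch the page so it is mapped in */
    if (pagemap_entry((uintptr_t)&probe, &e) == 0)
        printf("present=%d pfn=%#llx\n", (int)(e >> 63),
               (unsigned long long)(e & ((1ULL << 55) - 1)));
    return 0;
}

On FreeBSD the equivalent would indeed go through the kvm(3) interface (kvm_open()/kvm_read()) instead.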

So I am thinking about functionality that is deeply tied to the operating system. I hope that makes more sense now.

Thanks for the pimpl hint. I haven't looked at it yet but will check it out later.
 
When I had to support multiple operating systems (a small and simple agent that collects performance and utilisation metrics on Windows, BSD and Linux), I made a simple API covering a cherry-picked subset of the available metrics. For example, all three operating systems view CPU utilisation in somewhat different ways: on Windows there are 4 or 5 types of CPU metrics (idle, user, privileged, and two kinds roughly matching interrupts and delayed interrupts); on BSD there are also 5, but instead of delayed/soft interrupts you have nice; on Linux there are around 9 CPU metrics (it varies with kernel version). The same goes for memory utilisation: different operating systems represent memory use in different ways, and sometimes metrics are sub- or super-sets of other metrics (on one platform you can sum all the parts and get the total, while elsewhere some metrics are partially or fully included in others).

What I did was declare several structs, one per component, with a member for each metric I cared about, and design a few APIs to fill those structs. Example:
C:
struct CpuMetrics {
    unsigned user, nice, sys, idle, iowait, irq, softirq;
};
int query_cpu_metrics(struct CpuMetrics *m);
struct MemoryMetrics {
    unsigned long free, active, inactive, cache, buffers;
};
int query_memory_metrics(struct MemoryMetrics *m);

Then I wrote a separate source file handling each platform/operating system, and included that source via the respective Makefile. Each platform-specific implementation hides the details and converts all metrics to the chosen common representation -- the structs above. This is simplified, of course, but I hope the idea is clear.
C:
/* os_windows.c */
int query_cpu_metrics(struct CpuMetrics *m) {
    /* ... uses various Performance Counters APIs to gather information and fill |m| */
}
/* os_linux.c */
int query_cpu_metrics(struct CpuMetrics *m) {
    /* ... parses various /proc entries to gather information and fill |m| */
}
/* os_freebsd.c */
int query_cpu_metrics(struct CpuMetrics *m) {
    /* ... uses various sysctl() to gather information and fill |m| */
}

You can also use base classes, AKA interfaces, but I see no point in that, given you will likely never use several implementations/concrete classes simultaneously. Basically, you can think of this as driver code, one driver for each supported operating system; in any case only one such driver can be active at a time, and you select which one at compile time.
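To make the FreeBSD stub above a bit more concrete, here is a sketch of what it could look like via the kern.cp_time sysctl (metrics.h is a hypothetical header declaring struct CpuMetrics; the values are raw scheduler ticks, not percentages):

C:
/* os_freebsd.c -- sketch: fill CpuMetrics from the kern.cp_time sysctl */
#include <sys/types.h>
#include <sys/resource.h>   /* CPUSTATES, CP_USER, CP_NICE, CP_SYS, CP_INTR, CP_IDLE */
#include <sys/sysctl.h>
#include "metrics.h"        /* hypothetical header declaring struct CpuMetrics */

int query_cpu_metrics(struct CpuMetrics *m) {
    long ticks[CPUSTATES];
    size_t len = sizeof(ticks);
    if (sysctlbyname("kern.cp_time", ticks, &len, NULL, 0) != 0)
        return -1;
    m->user = ticks[CP_USER];
    m->nice = ticks[CP_NICE];
    m->sys  = ticks[CP_SYS];
    m->irq  = ticks[CP_INTR];
    m->idle = ticks[CP_IDLE];
    m->iowait = m->softirq = 0;   /* no direct FreeBSD counterpart for these fields */
    return 0;
}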
 
Just echoing Bobi B.: don’t add complexity for no reason. You could even have one .cpp file with the actual implementations of the same class for all your supported OSes, with #ifdefs selecting the appropriate code sections for each platform. This maximizes your code reuse, too.
 
Thanks Bobi B. Yes, your example is quite similar to what I have in mind. Perhaps dealing with it in the Makefile is the simplest approach here (then I don't need a lot of #ifdefs inside the code). Do you have a small example of how you would do this with Makefiles?

One other idea I came up with was using ld.so to dynamically load a specific library with a set of functions that implement the necessary functionality; the rest of the code stays platform-independent. It's basically a similar idea, I think.
 
Dynamic loading is handy when you want to avoid early binding/static linking, or when you wish to support plugins/modules. For example, I used dlopen(3) and dlsym(3) to read Nvidia CUDA performance counters on Linux, so the monitoring agent still runs even when the Nvidia runtime is not available/installed.

You can load "the driver" dynamically, but what would the benefits be? Even in that case you'll have to design your Makefile to build the right .so, depending on host platform.
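If you do go the dlopen(3) route, the loading side stays small. A sketch (the library name is made up for illustration; on Linux you would also link with -ldl):

C:
/* driver_load.c -- sketch: resolve query_cpu_metrics() from a per-OS "driver" .so */
#include <dlfcn.h>
#include <stdio.h>

struct CpuMetrics;                                     /* declared in the shared header */
typedef int (*query_cpu_metrics_fn)(struct CpuMetrics *);

static query_cpu_metrics_fn load_cpu_driver(const char *path) {
    void *handle = dlopen(path, RTLD_NOW | RTLD_LOCAL);
    if (handle == NULL) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return NULL;
    }
    /* POSIX allows converting dlsym()'s void * result to a function pointer */
    return (query_cpu_metrics_fn)dlsym(handle, "query_cpu_metrics");
    /* the handle is deliberately never dlclose()d: the driver lives as long as the program */
}

int main(void) {
    query_cpu_metrics_fn query = load_cpu_driver("./libmetrics_driver.so");
    if (query == NULL)
        return 1;
    /* ... allocate a struct CpuMetrics somewhere and call query(&metrics) ... */
    return 0;
}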

I tend to write Makefiles in one of two ways: either 1) separate BSDmakefile and GNUmakefile files, taking advantage of the fact that each make utility will pick up "its own" Makefile; or 2) a GNU Makefile that does runtime tests, detects the host OS, and toggles various flags, defines pre-processor variables, includes or excludes sources, etc. With some effort you can even write a single POSIX Makefile that works with both BSD and GNU make (I have done that as well, for simpler targets). The first approach can benefit from the fact that FreeBSD provides what you might call Makefile templates, for building programs, libraries or kernel modules (you set a handful of variables and .include <bsd.prog.mk>, for example; check the Makefiles in /usr/src on your FreeBSD box).

Example for the 2nd case:
Makefile:
ifeq ($(TARGETOS),)
  ifeq ($(OS),Windows_NT)
    TARGETOS = Windows
  else # detect OS (Linux/FreeBSD/MacOS)
    TARGETOS = $(shell uname)
  endif
endif
ifeq ($(TARGETOS),Windows)
  SOURCES += os_windows.c
endif
ifeq ($(TARGETOS),Linux)
  SOURCES += os_linux.c
  CFLAGS += -D_POSIX_C_SOURCE=200809L
endif
ifeq ($(TARGETOS),FreeBSD)
  SOURCES += os_freebsd.c
endif
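And for the 1st case, the FreeBSD side can lean on the system templates mentioned above. A minimal BSDmakefile sketch (program and source names are placeholders):

Makefile:
# BSDmakefile -- picked up by BSD make; uses FreeBSD's program template
PROG=   agent
SRCS=   main.c os_freebsd.c
MAN=    # no manual page for this tool
.include <bsd.prog.mk>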
 
Thanks for sharing that with me! I think I will try a similar approach, using make to configure things at compile time. Would anyone recommend automake in such a scenario?

(I personally have no experience with it and I am not sure at which point it makes sense to use/learn it. I will certainly try to keep things simple and understandable -- at the moment it's more or less planned as a proof of concept anyway.)
 
Again, if you're really trying to keep it simple, I fail to see how using the predefined-for-you macros (either the Qt flavours or the generic OS ones like __linux__) to check which OS is being compiled for, with #ifdef sections, is more complicated than what you're doing. You could have one C file (os_impl.c) that always gets compiled (no fiddling with which sources to include based on platform), with the correct bits for each OS.

C:
#if   defined(Q_OS_WIN)
int query_cpu_metrics(struct CpuMetrics *m) {
    /* ... uses various Performance Counters APIs to gather information and fill |m| */
}
#elif defined(Q_OS_LINUX)
int query_cpu_metrics(struct CpuMetrics *m) {
    /* ... parses various /proc entries to gather information and fill |m| */
}
#elif defined(Q_OS_FREEBSD)
int query_cpu_metrics(struct CpuMetrics *m) {
    /* ... uses various sysctl() to gather information and fill |m| */
}
#else
#error "IMPLEMENTATION NOT DEFINED FOR THIS OS"
#endif

Also, if you're doing any processing beyond just getting a value from the system, your code can be collapsed into:
C:
int query_cpu_metrics(struct CpuMetrics *m) {
  int metric1, metric2;
#if   defined(Q_OS_WIN)
  /* fill metric1 and metric2 */
#elif defined(Q_OS_LINUX)
  /* fill metric1 and metric2 */
#elif defined(Q_OS_FREEBSD)
  /* fill metric1 and metric2 */
#else
#error "IMPLEMENTATION NOT DEFINED FOR THIS OS"
  /* or return 1, if you prefer.  */
#endif
  /* trivial case, but perhaps there is common processing to be done
   * once you've got the equivalent metrics. Avoid duplication! */
  m->foo = metric2 - metric1;
  return 0;
}

In addition, this produces compile-time rather than runtime errors if anyone tries to build on an OS you didn't expect, which is also a good thing.
 
Again, if you're really trying to keep it simple, I fail to see how using the predefined-for-you macros (either the Qt flavours or the generic OS ones like __linux__) to check which OS is being compiled for, with #ifdef sections, is more complicated than what you're doing. You could have one C file (os_impl.c) that always gets compiled (no fiddling with which sources to include based on platform), with the correct bits for each OS.

Yes, I think I will do it that way. My question was more about a rule of thumb for when/if to use automake (and whether it would help in such a scenario anyway). A big project like htop seems to use an autoconf/autogen.sh/configure step, and it runs on multiple systems. Again, I will start with the simple #ifdef / Makefile approach that Bobi B. presented earlier.
 
qmake is a Makefile generator. Once you have generated your makefile, you run 'make' to perform your build.

qmake knows a fair bit about platform dependent options. For each platform it will load a config file that sets these. If you have additional needs then you can add something like

freebsd: QMAKE_CXXFLAGS += {your FreeBSD options}

You can isolate platform dependent functionality and have it build only on the matching platforms, for instance

freebsd: SOURCES += mystuff_freebsd.cpp

Lastly, when using Qt it will set a bunch of platform/compiler-specific defines (from the platform config file), as described by Eric.
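Putting those pieces together, a .pro sketch (the file names and the libkvm flag are just illustrative, tying back to the kvm(3) idea earlier in the thread):

Code:
# myapp.pro -- per-platform sources and extra link libraries
QT      += core
SOURCES += main.cpp
linux {
    SOURCES += pagetable_linux.cpp
}
freebsd {
    SOURCES += pagetable_freebsd.cpp
    LIBS    += -lkvm            # link against libkvm for the kvm(3) interface
}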
 
Yes, I think I will do it that way. My question was more about a rule of thumb for when/if to use automake (and whether it would help in such a scenario anyway). A big project like htop seems to use an autoconf/autogen.sh/configure step, and it runs on multiple systems. Again, I will start with the simple #ifdef / Makefile approach that Bobi B. presented earlier.
Don't use automake unless you have no choice. It's kinda broken in concept and practice. For example, make -B on GNU make will make it recurse indefinitely.

I also use separate BSDmakefile and GNUmakefile files in my projects. I recently ran into a link on writing portable makefiles that I found informative:
A Tutorial on Portable Makefiles
 

Quite an interesting read, but the part titled "Dependency management" is not particularly portable in terms of execution. Sure, it's great for clang and gcc, but what about tcc, pcc, wcc, bcc, cl, etc.?

Much better to use CMake in my opinion. Then you don't need a separate build system for Windows or POSIX.

Also your users will know how to build a CMake project. If you use a non-standard build setup, it can be awkward to dig through to work out how things go together.
A great example of this is the CDE project. It took me ages to work out how all the ancient crap worked, even after asking the other developers ;)
 
In my view, CMake should only be used where there's a requirement to support Visual Studio project files, and even then it's not a bad idea to provide some other kind of build arrangement for everywhere else, as zeromq does.

CMake isn't well documented as far as I can tell; I've often had problems with it in both basic and customized use.

When building, you end up running cmake itself, not the tools directly. That's maybe fine for Windows, but less so for POSIX.
 
I also use separate BSDmakefile and GNUmakefile files in my projects. I recently ran into a link on writing portable makefiles that I found informative:
A Tutorial on Portable Makefiles

That is a great tutorial, thanks! Will read that soon...

I ended up with a Makefile that includes some C files based on the system type. That works and is simple enough for me. I understand the recommendation for CMake if you're building for Windows systems; in my case, I don't have a need for that. I am using the qmake approach for a portable "client" Qt application and a "cross-Unix" Makefile for my worker nodes (they communicate over a PostgreSQL database / message-passing system).

The only thing that bothers me is that I really need to use GNU Make (since there are no conditionals in BSD Make... or am I wrong?)

That all said, it's very interesting to hear what others recommend for such an issue. If anyone still wants to share her/his preferred way, please let me know.
 
BSD Make does have conditionals; the syntax is just slightly different from GNU Make's.

For example, BSD Make:
# FICL_WANT_MINIMAL
# If set to nonzero, build the smallest possible Ficl interpreter.
.ifdef FICL_WANT_MINIMAL
CFLAGS += -DFICL_WANT_MINIMAL=$(FICL_WANT_MINIMAL)
.endif

And GNU Make:
# FICL_WANT_MINIMAL
# If set to nonzero, build the smallest possible Ficl interpreter.
ifdef FICL_WANT_MINIMAL
CFLAGS += -DFICL_WANT_MINIMAL=$(FICL_WANT_MINIMAL)
endif

It's worth taking a look in /usr/share/mk
 
BSD Make does have conditionals; the syntax is just slightly different from GNU Make's.

Thanks! I didn't know that, and yes, the sample file has a lot of good examples in it. However, this syntax is not readable by GNU make. Which is... meh :-/

Either I use GNU Make and have to install gmake on FreeBSD, or I use BSD Make and install it on the Linux guests. Or is there any other way?
 

Yes, good article. I read both of them and came up with a solution. It is based on variable substitution to add C files when a certain OS is detected. I don't think it is possible to detect the OS from within POSIX make (at least, I haven't seen it in the specification), so I am doing it like this:
Code:
make OS=$(uname)
Based on that OS variable, the matching C files are pulled into the build (and any extra flags into CFLAGS).
I think I am happy with that. If things get more complicated, I will switch to GNU make eventually.
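For the record, the core of such a Makefile can be as small as this (a sketch; here the per-OS file is named after the uname output, e.g. os_Linux.c and os_FreeBSD.c, and the built-in .c.o inference rule does the compiling):

Makefile:
# invoke as:  make OS=`uname`
OS   = FreeBSD
OBJS = main.o os_$(OS).o

# (note: the recipe line below must start with a tab)
agent: $(OBJS)
	$(CC) $(LDFLAGS) -o agent $(OBJS)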
 