If this FAQ does not answer your question, please send it to us.
Table of Contents
1. Using one Context from multiple Threads
2. Using Multiple Cores with OpenGL
3. Using Multiple OpenGL contexts with one Graphics Card
4. Using Multiple Graphics Cards with OpenGL
5. Using one Context from multiple Processes
6. Further Information
1. Using one Context from multiple Threads
- Q: Why does my OpenGL application crash/not work when I am rendering from another thread?
- A: The OpenGL context is thread-specific. You have to make it current in the thread using glXMakeCurrent, wglMakeCurrent or aglSetCurrentContext, depending on your operating system.
- Q: Why does it not work if I make my context current in another thread?
- A: One OpenGL context can only be current in one thread at a time. You have to release the context in the other thread first, by making another or no context current (see the sketch at the end of this section).
- Q: So how can I then make use of multiple processors with OpenGL?
- A: See Section 2.
- Q: Why does my X11 application still crash, even though I am handling the context correctly?
- A: X11 is also not thread safe by default. You either have to call XInitThreads during initialization, or use one Display connection per thread. In the latter case you cannot use the GLXContext from one Display connection with another connection.
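The following is a minimal GLX sketch of this hand-over, assuming a display, drawable and context created during initialization (creation code omitted): the context is released in the main thread before the render thread makes it current, and XInitThreads is called before any other Xlib function.

```c
#include <GL/glx.h>
#include <X11/Xlib.h>
#include <pthread.h>

/* Set up during initialization (creation code omitted). */
static Display*    dpy;
static GLXDrawable drawable;
static GLXContext  ctx;

static void* renderThread( void* arg )
{
    /* Acquire the context -- it must not be current in any other thread. */
    glXMakeCurrent( dpy, drawable, ctx );

    /* ... dispatch OpenGL commands ... */

    /* Release the context so another thread may make it current again. */
    glXMakeCurrent( dpy, None, NULL );
    return 0;
}

int main( void )
{
    XInitThreads();    /* must be called before any other Xlib function */

    /* ... open dpy, create drawable and ctx here (omitted) ... */

    /* Release the context in this thread before handing it over. */
    glXMakeCurrent( dpy, None, NULL );

    pthread_t thread;
    pthread_create( &thread, NULL, renderThread, NULL );
    pthread_join( thread, NULL );
    return 0;
}
```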
2. Using Multiple Cores with OpenGL
- Q: How can I use multiple threads with one OpenGL context?
- A: The preferred way is to dedicate one thread to dispatch the OpenGL commands, and offload the CPU-intensive operations to worker threads.
- Q: What operations can I perform in parallel to an OpenGL dispatch thread?
- A: A typical approach is to decouple the application thread from the draw thread. The application thread implements the application logic and event processing, while the draw thread renders the database. The database must not be modified during rendering, which is ensured either by multibuffering the data or by deferring data updates. An extension of this model is to have a separate culling thread pipelined between the application and dispatch thread ('app-cull-draw'). A sketch of this decoupling follows at the end of this section.
- Q: How does the multithreaded OpenGL in OS X work?
- A: It offloads CPU-intensive operations, for example pixel format conversions, to a second thread. Many OpenGL commands do not benefit from it, since they are passed mostly unmodified to the graphics card. Multithreaded OpenGL is not enabled by default; it has to be enabled per context (see the second sketch at the end of this section).
- Q: Can I use multithreaded OpenGL on Windows or Linux?
- A: We are not aware of other drivers implementing this feature. It can be emulated by performing the CPU-intensive operations from a second thread with a second, shared context.
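Below is a minimal sketch of the app-draw decoupling described above, using pthreads. SceneData, processEvents, updateScene and drawScene are hypothetical application names, not part of any OpenGL API; the application thread hands a finished copy of the scene data to the draw thread, which is the only thread issuing OpenGL commands.

```c
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical application data -- not part of any OpenGL API. */
typedef struct { int placeholder; /* positions, matrices, ... */ } SceneData;

static pthread_mutex_t lock       = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  frameReady = PTHREAD_COND_INITIALIZER;
static SceneData       pending;            /* last frame produced by the app */
static bool            hasFrame = false;
static bool            running  = true;

/* Application thread: application logic and event processing, no OpenGL. */
static void* applicationThread( void* arg )
{
    (void)arg;
    while( true )
    {
        SceneData frame = { 0 };
        /* processEvents();        hypothetical */
        /* updateScene( &frame );  hypothetical */

        pthread_mutex_lock( &lock );
        if( !running )
        {
            pthread_mutex_unlock( &lock );
            break;
        }
        pending  = frame;          /* hand the finished frame over */
        hasFrame = true;
        pthread_cond_signal( &frameReady );
        pthread_mutex_unlock( &lock );
    }
    return 0;
}

/* Draw thread: the only thread dispatching OpenGL commands. */
static void* drawThread( void* arg )
{
    (void)arg;
    /* makeContextCurrent();       hypothetical, see Section 1 */
    while( true )
    {
        SceneData frame;
        pthread_mutex_lock( &lock );
        while( !hasFrame && running )
            pthread_cond_wait( &frameReady, &lock );
        if( !hasFrame )
        {
            pthread_mutex_unlock( &lock );
            break;
        }
        frame    = pending;        /* private copy; the app may overwrite pending */
        hasFrame = false;
        pthread_mutex_unlock( &lock );

        /* drawScene( &frame );    hypothetical OpenGL dispatch */
        /* swapBuffers(); */
        (void)frame;
    }
    return 0;
}

int main( void )
{
    pthread_t app, draw;
    pthread_create( &app,  NULL, applicationThread, NULL );
    pthread_create( &draw, NULL, drawThread, NULL );

    /* ... run until quit is requested, then: ... */
    pthread_mutex_lock( &lock );
    running = false;
    pthread_cond_signal( &frameReady );
    pthread_mutex_unlock( &lock );

    pthread_join( app,  NULL );
    pthread_join( draw, NULL );
    return 0;
}
```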
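On OS X, multithreaded OpenGL is enabled per context through CGL, roughly as in the following sketch (error handling reduced to a single check):

```c
#include <OpenGL/OpenGL.h>

/* Enable Apple's multithreaded OpenGL engine for the current context. */
void enableMultithreadedGL( void )
{
    CGLContextObj ctx = CGLGetCurrentContext();
    CGLError      err = CGLEnable( ctx, kCGLCEMPEngine );

    if( err != kCGLNoError )
    {
        /* Multithreaded OpenGL is not available; continue single-threaded. */
    }
}
```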
3. Using Multiple OpenGL contexts with one Graphics Card
- Q: How should I update multiple windows on one graphics card?
- A: It is advisable to minimize switching between different contexts. If the draw operation is not CPU-bound, the windows should be updated sequentially. If the draw operation is CPU-bound, the CPU-intensive part should be pipelined with the draw thread (see Section 2) while still updating the windows sequentially from the draw thread. If that is not feasible, updating the windows from multiple threads might be faster on a multicore machine.
- Q: Why is my application so slow when rendering from multiple threads?
- A: The OpenGL driver has to schedule the threads on a single hardware resource. This causes a lot of context switching, which is particularly slow when each context uses a lot of GPU memory. Try updating your windows sequentially.
- Q: How can I minimize the memory used by multiple OpenGL contexts?
- A: Share the context by using the appropriate share parameter of glXCreateContext or aglCreateContext, or by calling wglShareLists. Create your OpenGL objects (textures, VBOs, etc.) in one context and then use them in all the other contexts (see the sketch below).
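A minimal GLX sketch of this sharing, assuming a display and a visual chosen during initialization: the second context is created with the first one as its share list, so objects created in either context can be used in both.

```c
#include <GL/gl.h>
#include <GL/glx.h>

/* Chosen during initialization (omitted). */
extern Display*     dpy;
extern XVisualInfo* visInfo;

void createSharedContexts( void )
{
    /* The first context owns the object namespace. */
    GLXContext first  = glXCreateContext( dpy, visInfo, NULL,  True );

    /* Pass the first context as the share list -- textures, VBOs, display
       lists, etc. are then visible in both contexts. */
    GLXContext second = glXCreateContext( dpy, visInfo, first, True );

    /* Example: create a texture once ...
       glXMakeCurrent( dpy, window1, first );
       glGenTextures( 1, &texture );
       ... upload data ...

       ... and bind the same texture id from the other context:
       glXMakeCurrent( dpy, window2, second );
       glBindTexture( GL_TEXTURE_2D, texture );                        */
    (void)second;
}
```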
4. Using Multiple Graphics Cards with OpenGL
- Q: How do the data types in my program relate to my hardware on the different operating systems?
- A: The following table is a high-level overview:
|                   | GLX                              | WGL                   | AGL                                      |
|-------------------|----------------------------------|-----------------------|------------------------------------------|
| GPU               | Display* and Screen¹ (affinity)  | HDC²                  | CGDirectDisplayID or CGOpenGLDisplayMask |
| OpenGL Context    | GLXContext                       | HGLRC                 | AGLContext                               |
| Window            | XID                              | HWND                  | WindowRef                                |
| PBuffer           | XID                              | HPBUFFERARB           | AGLPbuffer                               |
| Pixel Format      | int (Visual ID)                  | int                   | AGLPixelFormat                           |
| Pixel Format Info | XVisualInfo or GLXFBConfig       | PIXELFORMATDESCRIPTOR | AGLPixelFormat                           |

¹ DefaultScreen( display ) is used to determine the screen given by the display string (see below).
² The core WGL implementation does not select a specific GPU -- see below for details.
- Q: How do OpenGL programs behave with multiple graphics cards on Linux (X11)?
- A: Normally, the OpenGL commands are only sent to the card belonging to the screen used to open the display connection to the X server. That means that when Xinerama is used to virtualize the graphics cards, the window will only display OpenGL content on one card.
- Q: How do I address a specific graphics card on Linux (X11)?
- A: The graphics card is selected using XOpenDisplay. Typically one X server is used for all cards, and the card is addressed using the screen number, i.e. the number after the dot in the display name (":0.[screen]"). Sometimes one X server is used for each card, in which case the GPUs can be addressed using the server number in the display name (":[server].0"). A sketch at the end of this section illustrates this.
- Q: How do OpenGL programs behave with multiple graphics cards on Windows 2000/XP?
- A1: nVidia: The driver dispatches the OpenGL commands to all cards. This allows moving the window across cards, but incurs a small performance overhead.
- A2: ATI: The driver dispatches the OpenGL commands to the card the window was created on.
- Q: How do I address a specific nVidia graphics card on Windows 2000/XP?
- A: The WGL_NV_gpu_affinity extension, available on nVidia Quadro cards, can be used to restrict the OpenGL calls to certain cards for optimal performance.
- Q: How do I address a specific ATI graphics card on Windows 2000/XP?
- A: For an on-screen window, just create it on the right position for your GPU. For off-screen drawables, use the WGL_AMD_gpu_association extension.
- Q: How do I address a specific nVidia graphics card on Windows 7/Vista?
- A: The WGL_NV_gpu_affinity extension currently only supports FBO-based rendering. nVidia release 256 and later drivers exhibit the same limitation on Windows XP and 2000. It is not possible to select a specific GPU to render into a window or PBuffer.
- Q: How do OpenGL programs behave with multiple graphics cards on Mac OS X?
- A: The OpenGL rendering happens on the card where most of the pixels of the window are located. Areas of the window located on other graphics cards are copied from the main renderer.
- Q: How do I address a specific graphics card on Mac OS X (AGL)?
- A: OS X 10.4 and earlier: Use the GDHandle obtained from DMGetGDeviceByDisplayID as the device when creating the AGL pixel format.
- A: OS X 10.5: Use the display mask returned by CGDisplayIDToOpenGLDisplayMask as the value for the AGL_DISPLAY_MASK pixel format attribute.
- Q: Why can my application only address one of my graphics cards?
- A: Disable SLI or Crossfire. When it is enabled, the driver virtualizes the GPUs and presents them as one GPU to the application.
- Q: Why does my SLI/Crossfire setup only use one CPU?
- A: Applications using SLI or Crossfire use one OpenGL rendering thread, the same way as any other OpenGL application. The driver sends the OpenGL commands to the underlying graphics cards. This whole process is single-threaded; the parallel processing happens only later, on the individual graphics cards.
- Q: Why does my application not work when I am sharing the context between the pipes?
- A: Context sharing is only supported on a single pipe. The OpenGL objects are created only on one card and are therefore not usable on any other.
- Q: What other problems might arise when using multiple graphics cards?
- A: When using different cards the OpenGL implementation might be different, and the available OpenGL extensions and entry points might differ. It is also not possible to share contexts between graphics cards (see above).
- Q: How do I synchronize the output of multiple monitors?
- A: If the synchronization has to be perfect, a hardware solution like nVidia GSync has to be used to synchronize the video signal and buffer swap. When using monitors for display, a software synchronization is often good enough.
- Q: Why do I still have a significant delay between the buffer swap of multiple displays, even when using a software barrier before my swap buffers call?
- A: OpenGL commands are buffered before execution. When using a software swap synchronization, call glFinish before entering the swap barrier to complete all outstanding OpenGL commands (see the sketch below). Note that glFinish is bad for performance; consider a hardware synchronization mechanism if this is an issue.
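A minimal sketch of this software swap synchronization; swapBarrierEnter is a hypothetical application-level barrier spanning all rendering threads or processes, not an OpenGL call.

```c
#include <GL/gl.h>
#include <GL/glx.h>

/* Hypothetical barrier across all render threads/processes driving displays. */
extern void swapBarrierEnter( void );

void synchronizedSwap( Display* dpy, GLXDrawable drawable )
{
    /* Drain the command buffer so the swap is not delayed by queued work.
       Note: glFinish stalls the pipeline and costs performance. */
    glFinish();

    /* Wait until all peers have finished rendering their frame. */
    swapBarrierEnter();

    glXSwapBuffers( dpy, drawable );
}
```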
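Addressing a specific graphics card on Linux, as described earlier in this section, comes down to the display string passed to XOpenDisplay. A minimal sketch, assuming the second card is configured as screen 1 of display server 0:

```c
#include <GL/glx.h>
#include <X11/Xlib.h>

void openSecondGpu( void )
{
    /* ":0.1" = server 0, screen 1 -- typically the second graphics card
       when one X server drives all cards. */
    Display* dpy = XOpenDisplay( ":0.1" );
    if( !dpy )
        return;

    /* DefaultScreen() returns the screen given in the display string. */
    const int screen = DefaultScreen( dpy );

    static int attributes[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
    XVisualInfo* visInfo = glXChooseVisual( dpy, screen, attributes );

    /* ... create the window and GLXContext on this display and screen ... */
    (void)visInfo;
}
```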
5. Using one Context from multiple Processes
- Q: Can I use a context created in one process in another process?
- A: No, with the exception noted in the next question.
- Q: So what about EXT_import_context?
- A: This extension indeed lets you share an indirect context between processes. Indirect means that all OpenGL commands are sent through the GLX wire protocol instead of directly to the GPU. This results in very slow performance, and often in less functionality (see the sketch below).
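A minimal GLX sketch of EXT_import_context, assuming the context ID is passed between the processes by some IPC mechanism (not shown); note that the exported context has to be indirect.

```c
#define GLX_GLXEXT_PROTOTYPES  /* for the EXT_import_context prototypes */
#include <GL/glx.h>

/* Exporting process: create an indirect context and publish its ID. */
GLXContextID exportContext( Display* dpy, XVisualInfo* visInfo )
{
    GLXContext ctx = glXCreateContext( dpy, visInfo, NULL, False /*indirect*/ );
    return glXGetContextIDEXT( ctx );  /* send this XID to the other process */
}

/* Importing process: attach to the context created by the exporting process. */
GLXContext importContext( Display* dpy, GLXContextID id )
{
    GLXContext ctx = glXImportContextEXT( dpy, id );
    /* glXMakeCurrent( dpy, drawable, ctx ); ... render, indirectly ... */
    /* glXFreeContextEXT( dpy, ctx ); when done                         */
    return ctx;
}
```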