Off Topic / Re: [MEGATHREAD] Personal Computer - Updated builds thanks to Logical Increments
« on: August 26, 2014, 05:14:22 PM »
um is that your monitor. because i will hit you if it is
Many workstation applications, particularly in the CAD market, offer the option of using antialiased points and lines (sometimes called “wireframe”). With this option turned on, component edges can be viewed as precisely as possible without encountering the aliasing artifacts that are associated with lines displayed on a rasterized display.
To address this feature in professional workstation applications, the NVIDIA Quadro GPU family supports antialiased lines in hardware. The result? When antialiased points and lines are used on the NVIDIA Quadro family of GPUs, performance is noticeably higher than on the GeForce family of GPUs.
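In OpenGL terms, the feature described above corresponds to "smooth" line rendering. As a minimal sketch, this is the state an application might set to request antialiased lines; it assumes a current OpenGL context created elsewhere (e.g. by GLUT or GLFW), so it is a fragment rather than a standalone program:

```c
#include <GL/gl.h>

/* Sketch: requesting antialiased ("smooth") lines in legacy OpenGL.
 * Assumes a current GL context exists; on Quadro-class hardware these
 * paths are accelerated, per the text above. */
static void draw_wireframe_edge(void) {
    glEnable(GL_LINE_SMOOTH);                          /* antialias line primitives */
    glEnable(GL_BLEND);                                /* AA coverage is applied via blending */
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);            /* prefer quality over speed */

    glBegin(GL_LINES);                                 /* one wireframe edge */
    glVertex3f(0.0f, 0.0f, 0.0f);
    glVertex3f(1.0f, 1.0f, 0.0f);
    glEnd();
}
```

The same applies to antialiased points via `GL_POINT_SMOOTH`.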
Another hardware feature difference between NVIDIA's workstation and consumer GPUs is support for OpenGL logic operations.
Logic operations are often used by workstation applications in mechanical computer-aided design (MCAD) and digital content creation (DCC) markets. They’re used to draw on top of a 3D scene to make specific features visible without significantly changing or complicating the existing drawing functions or adversely affecting performance.
A similar example arises in DCC applications, where the XOR logic operation is used to draw sophisticated cursors, such as those in the paint operation of Alias' Maya application. The XOR logic operation draws the cursor on top of the 3D scene for applications that do not support overlay planes.
If the XOR logic operation is enabled, the performance drop of the NVIDIA Quadro is minimal when compared to that of GeForce. In professional applications where logic operations are used, this equates to significant improvement in performance—a definite productivity benefit.
During a typical workflow, workstation applications pop up many windows for menus or alternative views of components or scenes. Unlike consumer applications such as games, which typically occupy the full screen with a single window, these applications produce many overlapping windows. Depending on how they are handled by the graphics hardware, overlapping windows may noticeably affect visual quality and graphics performance.
NVIDIA's Quadro GPU architecture manages the transfer of data between a window and the overall frame buffer by means of clip regions. When a window has no overlapping windows, the entire contents of the color buffer can be transferred to the frame buffer in a single, continuous rectangular region. However, if other windows overlap the window, the transfer of data from the color buffer to the frame buffer must be broken into a series of smaller, discontinuous rectangular regions. These rectangular regions are referred to as "clip regions."
Most consumer applications and games don't create many pop-up windows, so the GeForce family of GPUs supports only one clip region, whereas the NVIDIA Quadro family supports up to eight clip regions.
In many situations, understanding the relationship between components in a complex 3D scene can be eased by using clip planes. Clip planes allow sections of the geometry to be cut away so the user can look inside solid objects.
The NVIDIA Quadro family of GPUs supports clip-plane acceleration in hardware—a significant performance improvement when it is used in professional applications.
Another feature offered by the NVIDIA Quadro family of GPUs is memory management optimization, which efficiently allocates and shares memory resources between concurrent graphics windows and applications. In many situations, this feature directly affects application performance, and so offers demonstrable benefits over the consumer-oriented GeForce GPU family.
It is a common misconception that 32-bit processors and operating systems are limited to 4 GB (2^32 bytes) of RAM, as were the original 80386DX and other early IA-32 CPUs. Since the 1995 Pentium Pro, almost all modern x86 processors can in fact address up to 64 GB (2^36 bytes) of RAM via Physical Address Extension (PAE).
Chipsets and motherboards allowing more than 4 GB of RAM with x86 processors do exist, but in the past, most of those intended for anything other than the high-end server market supported only 4 GB of RAM.
In Microsoft's "non-server", or "client", x86 editions of Windows (Windows XP, Windows Vista, Windows 7, Windows 8, and Windows 8.1), the 32-bit (x86) versions are able to operate x86 processors in PAE mode, and do so by default as long as the CPU present supports the NX bit. Nevertheless, these operating systems do not permit addressing of physical memory above the 4 GB address boundary. This is not an architectural limit; it is a limit imposed by Microsoft via license-enforcement routines as a workaround for device-driver compatibility issues discovered during testing. Even below that boundary, the exact usable barrier varies by motherboard and I/O device configuration, particularly the size of video RAM; it may be in the range of 2.75 GB to 3.5 GB.
Quote: "Why does everyone jump on the Vista-hate bandwagon? I just don't get it."
that kind of answers itself. it's a bandwagon, and people will always jump on it if it's popular enough.
FINALLY someone understands the context of that quote. every time it was brought up in some sort of argument i just wanted to pull my nostril hairs out
Quote: "still pretty sickening more than one person voted that"
no it's not. learn to respect other people's beliefs, you jackhead
[img]http://i.imgur.com/kZ9Hr3T.jpg[/img]
hey, good to see the crips and bloods rolling together again... wait, what
H-how algebraic...do you want to simplify this problem? i will have no trouble applying the quadratic formula on you, you obtuse trigonometric identity