DirectX 8. Getting Started with DirectX Graphics
This is actually much simpler than it may look. First, I made up a size for the panel just so we'd have something to work with. Next, I asked the device to create a vertex buffer containing enough memory for four vertices of my format. Then I locked the buffer so I could set the values. One thing to note: locking buffers is very expensive, so I'm only going to do it once. We can manipulate the vertices without locking, but we'll discuss that later. For this example, I have set the four points centered on (0, 0). Keep this in the back of your mind; it will have ramifications later. Also, I set the texture coordinates. The SDK explains these pretty well, so I won't get into that here. The short story is that we are set up to draw the entire texture. So, now we have a rectangle set up. The next step is to draw it…
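For reference, the setup described above might look roughly like this. This is only a sketch, not the exact original listing: the vertex structure, the panel size, and names such as g_pd3dDevice, g_pVertices, and D3DFVF_PANELVERTEX are assumptions here.

    // Assumed vertex format: untransformed position, diffuse color, one set of texture coordinates.
    struct PANELVERTEX
    {
        FLOAT x, y, z;
        DWORD color;
        FLOAT u, v;
    };
    #define D3DFVF_PANELVERTEX (D3DFVF_XYZ | D3DFVF_DIFFUSE | D3DFVF_TEX1)

    LPDIRECT3DVERTEXBUFFER8 g_pVertices = NULL;

    void CreatePanel()
    {
        float PanelWidth  = 50.0f;    // made-up size, just something to work with
        float PanelHeight = 100.0f;

        // Ask the device for enough memory to hold four vertices of our format.
        g_pd3dDevice->CreateVertexBuffer(4 * sizeof(PANELVERTEX), D3DUSAGE_WRITEONLY,
                                         D3DFVF_PANELVERTEX, D3DPOOL_MANAGED, &g_pVertices);

        // Lock once, set the values, unlock. Locking is expensive, so we only do it here.
        PANELVERTEX* pVertices = NULL;
        g_pVertices->Lock(0, 4 * sizeof(PANELVERTEX), (BYTE**)&pVertices, 0);

        // Four white corners centered on (0, 0).
        pVertices[0].color = pVertices[1].color = pVertices[2].color = pVertices[3].color = 0xFFFFFFFF;

        pVertices[0].x = -PanelWidth / 2.0f; pVertices[0].y =  PanelHeight / 2.0f; // top left
        pVertices[1].x =  PanelWidth / 2.0f; pVertices[1].y =  PanelHeight / 2.0f; // top right
        pVertices[2].x =  PanelWidth / 2.0f; pVertices[2].y = -PanelHeight / 2.0f; // bottom right
        pVertices[3].x = -PanelWidth / 2.0f; pVertices[3].y = -PanelHeight / 2.0f; // bottom left
        pVertices[0].z = pVertices[1].z = pVertices[2].z = pVertices[3].z = 1.0f;

        // Texture coordinates set up to draw the entire texture.
        pVertices[0].u = 0.0f; pVertices[0].v = 0.0f;
        pVertices[1].u = 1.0f; pVertices[1].v = 0.0f;
        pVertices[2].u = 1.0f; pVertices[2].v = 1.0f;
        pVertices[3].u = 0.0f; pVertices[3].v = 1.0f;

        g_pVertices->Unlock();
    }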
Drawing the rectangle is pretty easy. Add the following lines of code to your Render2D function:
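Those lines might look roughly like this, continuing with the names assumed in the setup sketch above; the primitive count assumes the four-vertex fan created there.

    g_pd3dDevice->SetVertexShader(D3DFVF_PANELVERTEX);                   // how the vertices are formatted
    g_pd3dDevice->SetStreamSource(0, g_pVertices, sizeof(PANELVERTEX));  // which vertices to use
    g_pd3dDevice->DrawPrimitive(D3DPT_TRIANGLEFAN, 0, 2);                // draw them as a two-triangle fan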
These lines tell the device how the vertices are formatted, which vertices to use, and how to use them. I have chosen to draw this as a triangle fan because it's more compact than drawing two separate triangles. Note that since we are not dealing with other vertex formats or other vertex buffers, we could have moved the first two lines to our PostInitialize function. I put them here to stress that you have to tell the device what it's dealing with. If you don't, it may assume the vertices are in a different format, which can cause a crash. At this point, you can compile and run the code. If everything is correct, you should see a black rectangle on a blue background. This isn't quite right, because we set the vertex colors to white. The problem is that the device has lighting enabled, which we don't need. Turn lighting off by adding this line to the PostInitialize function:
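With the same device pointer as above, turning lighting off is a single render state change:

    g_pd3dDevice->SetRenderState(D3DRS_LIGHTING, FALSE);  // ignore lighting and use the vertex colors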
Now, recompile and the device will use the vertex colors. If you'd like, you can change the vertex colors and see the effect. So far, so good, but a game that features a white rectangle is visually boring, and we haven't gotten to the idea of blitting a bitmap yet. So, we have to add a texture.

Texturing the Panel
A texture is basically a bitmap that can be loaded from a file or generated from data. For simplicity, we'll just use files. Add the following to your global variables:
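Something along these lines (the name g_pTexture is an assumption):

    LPDIRECT3DTEXTURE8 g_pTexture = NULL;  // the texture object we'll be using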
This is the texture object we'll be using. To load a texture from a file, add this line to PostInitialize:
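The call in question is D3DXCreateTextureFromFileEx. Here is a sketch of it with most parameters left at unremarkable values; the exact filter and mip settings in the original may differ:

    D3DXCreateTextureFromFileEx(g_pd3dDevice, "[Some Image File]",
                                D3DX_DEFAULT, D3DX_DEFAULT,   // take the dimensions from the file
                                1, 0,                         // one mip level, no special usage
                                D3DFMT_A8R8G8B8,              // a pixel format with an alpha channel
                                D3DPOOL_MANAGED,
                                D3DX_FILTER_NONE,             // don't stretch the image if the texture is rounded up
                                D3DX_FILTER_NONE,
                                0,                            // ColorKey, ignored for now
                                NULL, NULL, &g_pTexture);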
Replace [Some Image File] with a file of your choice. The D3DX function can load many standard formats. The pixel format we're using has an alpha channel, so we could load an image format that carries one, such as .dds. Also, I'm ignoring the ColorKey parameter, but you could specify a color key for transparency; I'll get back to transparency in a little bit. For now, we have a texture and we've loaded an image into it. Now we have to tell the device to use it. Add the following line to the beginning of Render2D:
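With the globals assumed above, that line would be:

    g_pd3dDevice->SetTexture(0, g_pTexture);  // use this texture on stage 0 for the triangles that follow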
This tells the device to render the triangles using the texture. One important thing to remember here is that, for simplicity, I am not adding error checking. You should probably add error checking to make sure the texture actually loaded before attempting to use it. One likely source of errors is that on a lot of hardware, textures must have dimensions that are powers of 2, such as 64×64, 128×512, etc. This constraint is no longer true on the latest nVidia hardware, but to be safe, use powers of 2. This limitation bothers a lot of people, so I'll tell you how to work around it in a moment. For now, compile and run and you should see your image mapped onto the rectangle.
Note that the image is stretched or squashed to fit the rectangle. You can adjust that by changing the texture coordinates. For example, if you change the lines where u = 1.0 to u = 0.5, only half of the texture is mapped onto the panel and the rest is simply never drawn. So, if you had a 640×480 image that you wanted to place on a 640×480 window, you could place the image in a 1024×512 texture and specify 0.625 and 0.9375 as the maximum texture coordinates, as shown in the sketch after this paragraph. You could use the remaining parts of the texture to hold other sub-images that are mapped to other panels (through the appropriate texture coordinates). In general, you want to optimize the way textures are used, because they eat up graphics memory and/or move across the bus. This may seem like a lot of work for a blit, but it has a lot to do with the way new cards are optimized for 3D (like it or not). Besides, putting some thought into how you move large chunks of memory around the system is never a bad idea. But I'll get off my soapbox.
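To make the 640×480 example concrete, here is roughly how the non-zero texture coordinates would change in the setup code sketched earlier (the vertex order is the one assumed there):

    // Map only the 640x480 sub-image of a 1024x512 texture onto the panel.
    pVertices[1].u = 640.0f / 1024.0f;  // 0.625
    pVertices[2].u = 640.0f / 1024.0f;
    pVertices[2].v = 480.0f / 512.0f;   // 0.9375
    pVertices[3].v = 480.0f / 512.0f;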
Let's see where we are so far. At one level, we've written a lot of code to blit a simple bitmap. But hopefully you can see some of the benefits and the opportunities for tweaking. For instance, the texture coordinates automatically scale the image to the area we've defined with the geometry. There are lots of things this does for us, but consider the following. If we had set up our ortho matrix to use a percentage-based mapping, and we specified a panel as occupying the lower quarter of the screen (for a UI, let's say), and we specified a texture with the correct texture coordinates, then our UI would automagically be drawn correctly for any chosen window/screen size. Not exactly cold fusion, but it's one of many examples. Now that we have the texture working well, we have to get back to talking about transparency.
As I said before, one easy way of adding transparency is to specify a color key value in the call to D3DXCreateTextureFromFileEx. Another is to use an image that actually has an alpha channel. Either way, load a texture with some transparency and run the app. You should see no difference, because alpha blending is not yet enabled. To enable alpha blending, add these lines to PostInitialize:
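A sketch of those four lines, assuming the same device pointer as before; these are the standard states for this basic blend, though the original listing may differ in detail:

    g_pd3dDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);              // enable blending
    g_pd3dDevice->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);         // weight the source by its alpha
    g_pd3dDevice->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);     // weight the destination by the inverse
    g_pd3dDevice->SetTextureStageState(0, D3DTSS_ALPHAOP, D3DTOP_MODULATE);  // vertex alpha scales the texture alpha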
The first line enables blending. The next two specify how the blending works; there are many possibilities, but this is the most basic type. The last line sets things up so that changing the alpha component of the vertex colors will fade the entire panel by scaling the texture values. For a more in-depth discussion of the available settings, see the SDK. Once these lines are in place, you should see the proper transparency. Try changing the colors of the vertices to see how they affect the panel.
By now our panel has many of the visual properties we need, but it's still stuck in the center of our viewport. For a game, you probably want things to move. One obvious way is to relock the vertices and change their positions. DO NOT do this!! Locking is very expensive, involves moving data around, and is unnecessary here. A better way is to specify a world transformation matrix that moves the points. Matrices may seem a bit scary to many people, but there is a host of D3DX functions that make them very easy to work with. For example, to move the panel, add the following code to the beginning of Render2D:
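A sketch of that code, assuming the panel's position lives in variables such as PosX and PosY that you update elsewhere:

    D3DXMATRIX matTranslation;
    D3DXMatrixTranslation(&matTranslation, PosX, PosY, 0.0f);  // build a translation matrix with D3DX
    g_pd3dDevice->SetTransform(D3DTS_WORLD, &matTranslation);  // move the panel without touching the vertex buffer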