- Aug 18, 2005
- 91
- 0
- 0
Hello everybody. Hopefully somebody out there can help me with this problem I'm having. I have searched extensively online for an answer, but nothing I've found comes quite close.
I am trying to design a simple 2D physics engine that can handle anywhere from 500 to 1000 simple circles on screen at one time, all interacting with each other. The way I have it set up now, a class handles the creation of circles, using sin and cos to generate the required vertex coordinates. I want to be able to size the vertex buffer dynamically so that I don't have to hard-code a set size into the program - so I could start with a vertex buffer of size 0 and eventually let people draw polygons (for interaction), resizing the vertex buffer as they draw. One idea I have is: when a person draws, destroy the vertex buffer and create a new one of the required size. Would this be the correct way to do it?
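To show what I mean by resizing, here is the growth policy I had in mind, sketched without any DirectX calls (the function name and the starting capacity of 64 are just placeholders, not anything from my engine):

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical helper: given the buffer's current capacity in vertices
// and the number of vertices now required, return the capacity the
// recreated vertex buffer should have. Doubling keeps the number of
// destroy/recreate cycles logarithmic in the final size.
std::size_t grownCapacity(std::size_t current, std::size_t required)
{
    std::size_t capacity = (current == 0) ? 64 : current; // seed for the size-0 start
    while (capacity < required)
        capacity *= 2;
    return capacity;
}
```

The idea would be: when a draw needs more room, release the old buffer, call CreateVertexBuffer with the new vertex count times the vertex size, and re-upload everything.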
Question two. Aside from the resize problem, I am trying to get a demo going. Basically, I want to hard-code circle positions into the engine and have them draw accordingly using a single vertex buffer. (I don't want a vertex buffer for each object, because that would be inefficient with, say, 500 objects.) The problem I am having is copying the circle objects' vertices into the buffer. I lock the buffer and try to copy each individual circle using a for loop. For example:
for (int i = 0; i < numberCircles; i++)
{
    memcpy(pVoid, ball.getVerts(i), sizeof(Ball));
}
This is not having the desired effect. What I want is for each memcpy to append that ball's vertices to the end of the buffer. I'm not exactly sure how it works, but it seems that it's clearing the entire buffer every time I call memcpy, which I don't want. Is this what I should be doing? I also considered keeping another vector in my program, so that every time a ball is created its vertices get added to the end of the vector; then I memcpy the whole vector into the buffer and draw. That did not work either.
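To be concrete, the effect I'm after looks something like this (a standalone sketch with a plain array standing in for the locked buffer; Vertex, vertsPerBall, and the ball storage are placeholders for my real types):

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>
#include <vector>

struct Vertex { float x, y; };   // stand-in for my FVF vertex struct
const std::size_t vertsPerBall = 3; // placeholder vertex count per circle

// Copy each ball's vertices one after another, advancing the
// destination pointer by the amount already written so that each
// ball lands after the previous one instead of overwriting it.
void fillBuffer(Vertex* dest, const std::vector<std::vector<Vertex>>& balls)
{
    for (std::size_t i = 0; i < balls.size(); i++)
    {
        std::memcpy(dest + i * vertsPerBall,        // offset past earlier balls
                    balls[i].data(),
                    vertsPerBall * sizeof(Vertex));
    }
}
```

In my loop above the destination is always pVoid itself, so every ball would land at offset 0, which might explain why it looks like the buffer keeps getting wiped.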
Third question. Since I will have to update a lot of ball positions each frame, I considered updating the position of each ball's center point and then using offsets to calculate the positions of the other vertices. That did not seem efficient, because from what I understand the GPU should take care of transformations, not the CPU. I am now planning to use a D3DXMATRIX. If I use a matrix, will the transformation automatically be handled by the GPU, so that I don't have to hand-code the offsets for each point in each ball? Say I do use matrices. Since there are, say, 1000 balls on screen, each ball has its own position. Should each ball then also have its own matrix? I would then loop each frame, fetch each ball's matrix, and call the DirectX SetTransform function to get the desired position change before drawing that ball. Is this the way to do it?
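For reference, this is the per-ball transform I mean, sketched on the CPU without DirectX just to show the math that SetTransform would hand to the GPU (Vec2, Mat3, and the function names are made up for the sketch):

```cpp
#include <array>
#include <cassert>

struct Vec2 { float x, y; };

// Row-major 3x3 matrix for 2D homogeneous transforms.
using Mat3 = std::array<float, 9>;

// Build a translation matrix for a ball centred at (tx, ty).
Mat3 translation(float tx, float ty)
{
    return { 1, 0, tx,
             0, 1, ty,
             0, 0, 1 };
}

// Apply the matrix to a vertex given in the ball's local space
// (e.g. a point on the circle from my sin/cos generator).
Vec2 transform(const Mat3& m, Vec2 v)
{
    return { m[0] * v.x + m[1] * v.y + m[2],
             m[3] * v.x + m[4] * v.y + m[5] };
}
```

With world matrices, each ball would keep one such matrix, and the frame loop would be: set the ball's matrix, draw the shared circle geometry, repeat for the next ball.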