Kameleon wrote: How do you know it's easy if you don't know much about programming?

Well, for one, I am a PHP developer. There is definitely a lot more to the native languages, but I am confident I can learn that stuff. I know some C++ basics too.
fau wrote: The functionality you're describing is already present in jomme; it's motion blur with a half frame-time gap (I don't remember the exact cvar names). JK2 at 24 fps with rolling shutter simulation looks awful though. I made a few clips for comparison a year or two ago, but don't have this specific one uploaded.

I implemented motion blur and described it here (including a rolling shutter example): https://github.com/aufau/jedi-outcast/b ... ME-MME.txt; you can achieve the same in jomme, although the cvars are different.

I looked up the file you linked, but all I could find in it was an example with a negative blurOverlap for the "gap". This does simulate a shutter, but not a rolling shutter. Are you sure you know what I mean?
One more thing. That file says the JK2 engine can't run above 1000 fps, which I doubt, because I recorded 2000 fps scenes a few years back. I also tried joMME with 32 blurFrames at 200 fps (6400 fps). It seemed to work without trouble.
fau wrote: I thought you meant this: https://en.wikipedia.org/wiki/Rotary_disc_shutter

No, I didn't mean a rotary disc shutter, but I am glad that this one is already implemented. Cool stuff.
In the original engine, time is counted in milliseconds, so at 2000 fps every other frame is simply a copy of the previous one. However, there is a simple hack to get movement at frame rates higher than 1000 fps which I wanted to describe, but it looks like jomme already has it implemented, so you're right, it should work.
How to start programming for jomme… do you know C?
Answering your "how hard would it be?" question: rather easy (although making a usable cvar interface for this, rather than a one-time hack, may be slightly more engaging). I understand JK2 is a medium-large project and can be intimidating at the start; I can give you instructions on what should be done.
Also keep in mind you could just render at a huge fps and use some post-processing tool or script that would generate the rolling shutter effect by combining these frames; it would be awfully slow though. For a vertical rolling shutter effect at 720p and 30 fps you'd need to render 720 * 30 = 21600 frames per second.
fau wrote: Yeah, I will have a look and try to describe how I would take it on. I'd rather do it here in public; maybe it will be useful to someone else.

Aight!
Tom Arrow wrote: Alrighty! Thanks for this write-up. I am only doing this on the side; that's why I am responding so late.

I have downloaded the project and opened it in Visual Studio 2013 Express. I tried a quick compile (just hit the Build button) and got this error:

error C1083: Cannot open include file 'ft2build.h': No such file or directory.

It comes from line 81 in the file tr_mme_font.cpp. I assume I need to somehow download the FreeType sources and use them, but where do I put them so that Visual Studio recognizes them?

I've heard jomme has a somewhat broken build system on Windows. Try setting the BUILD_FREETYPE preprocessor macro to 0 in cmake.
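One way fau's BUILD_FREETYPE suggestion might be applied when generating the project files; the exact option name is an assumption based on his description, so check jomme's actual CMakeLists before relying on it:

```shell
# Assumed invocation: define the BUILD_FREETYPE macro as 0 at configure
# time so tr_mme_font.cpp skips the ft2build.h include. The macro name
# is taken from fau's post and may differ in jomme's CMakeLists.
cmake -DBUILD_FREETYPE=0 ..
```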
Tom Arrow wrote: As for the rest of what you wrote, I looked into the code. The main capturing seems to happen in R_MME_GetShot.

This is the code that creates the actual data:

qglReadPixels( 0, 0, glConfig.vidWidth, glConfig.vidHeight, GL_RGB, GL_UNSIGNED_BYTE, 0 );

Then it seems to be fetched by this code:

qglBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pboIds[index]);
GLubyte* ptr = (GLubyte*)qglMapBufferARB(GL_PIXEL_PACK_BUFFER_ARB, GL_READ_ONLY_ARB);

I can't find an actual write-up of that function, so I assume it's some kind of OpenGL API thing. (Sorry, I never did anything with game or 3D code.)

The qgl* functions correspond to the same OpenGL functions with a gl prefix instead of qgl; you can easily google their documentation: https://www.opengl.org/sdk/docs/man2/xh ... Pixels.xml
Tom Arrow wrote: Anyhow, it looks easy enough to specify a single pixel for reading. Depending on the API, it may be possible to do the qglReadPixels call for each pixel and then keep the rest of the code as it is. Or it may be necessary to do the read-out after every pixel and then somehow put all the resulting data together into one variable.

A single pixel per frame won't work. Let's do some estimation. The game engine does not only render frames; it also runs game physics, processes user input, etc. each frame. No matter what you do, it probably won't run at more than a few thousand fps. If you want to grab only one pixel each frame, at 720p you need 1280 * 720 = 921600 frames to generate a single "combined" frame! 921600 / 1000 = 921.6 seconds for a single output frame. This is way too slow for any practical use. I thought of capturing one row per frame; that means only 720 in-game frames for one output frame.
Tom Arrow wrote: I suppose the other big question is: how do I, within R_MME_GetShot, advance the time by a few nanoseconds and then make another read-out? (if possible at all). I assume that there are separate functions that actually move all the objects etc. based on the time, and I assume these functions are somewhere in the main loop. Although I wonder how this plays together with R_MME_TakeShot, as that one seems to already incorporate all the motion blur stuff etc. without referencing any time variables at all.

Just advance the whole game by one frame. R_MME_TakeShot doesn't generate a blurred frame in one shot. It accumulates (see the "accum" functions) the current frame into a motion blur buffer, and only when it has combined enough of these frames do they get blended together into a single, blurred output frame.
Tom Arrow wrote: I guess I'll get a better understanding if I take a deeper look at the code, but if you can provide any quick answers to guide me through delving into the code further, I'll be happy.

A quick comment on your second post (I hadn't noticed it): instead of grabbing the same row for a few frames and combining them, you can just grab a few rows each frame and combine these rectangles for the same effect. When you look at it this way, you can see that motion blur doesn't add much overhead at all, because you need to render all these frames anyway for the rolling shutter effect.
Tom Arrow wrote: As for how to implement it, I would opt (if possible) for adjusting the code under the assumption that motion blur is disabled, which should do the trick for a quick-and-dirty approach. Once I get that working, I can think about how to make it work together with the motion blur. Or not, depending on how difficult that seems.
fau wrote: The qgl* functions correspond to the same OpenGL functions with a gl prefix instead of qgl. You can easily google their documentation: https://www.opengl.org/sdk/docs/man2/xh ... Pixels.xml

Thanks, I found that out too.
fau wrote: A single pixel per frame won't work. Let's do some estimation. The game engine does not only render frames; it also runs game physics, processes user input, etc. each frame. No matter what you do, it probably won't run at more than a few thousand fps. If you want to grab only one pixel each frame, at 720p you need 1280 * 720 = 921600 frames to generate a single "combined" frame! 921600 / 1000 = 921.6 seconds for a single output frame. This is way too slow for any practical use. I thought of capturing one row per frame; that means only 720 in-game frames for one output frame.

Well, only trying it out will provide an answer to that, but your pixel-approximation idea is funny as hell; I might as well try it. Besides, I can probably live without the pixel-based effect of rolling shutter. It is hardly noticeable anyway, is my guess.
fau wrote: Just advance the whole game by one frame. R_MME_TakeShot doesn't generate a blurred frame in one shot. It accumulates (see the "accum" functions) the current frame into a motion blur buffer, and only when it has combined enough of these frames do they get blended together into a single, blurred output frame.

Ah, aight. Seems I have some understanding to do yet.
fau wrote: A quick comment on your second post (I hadn't noticed it): instead of grabbing the same row for a few frames and combining them, you can just grab a few rows each frame and combine these rectangles for the same effect. When you look at it this way, you can see that motion blur doesn't add much overhead at all, because you need to render all these frames anyway for the rolling shutter effect.

You mean taking the whole frames and then simply combining the relevant rows for each final shot? Well, I suppose we'd better put this on the back burner until I get anywhere with the basic effect.
Tom Arrow wrote: 1. A very low timescale on the vanilla jomme code did not do anything beyond 0.15 or so. I found that there is a limit keeping msec from going below 1 in the jk2 code. I removed that. After that, the game just got stuck on values below 0.15 or so. After I replaced everything else with float values, it seems to work quite nicely. BUT, what I was wondering is why even lower timescale values DO work on vanilla code if capturing is activated. What sorcery is this?

jomme already has a hack for this that I mentioned earlier; search for cg.timeFraction. It's used only in the few places where it really matters. Converting the whole code to floating-point time would be a huge amount of work, and unnecessary. Also, you'd need to use double, not float. Server time can go up to 2^31 milliseconds (about 24 days), and a float holds only 23 bits of mantissa.
Tom Arrow wrote: 2. I do see the calculation of the higher framerate based on blurFrames in CL_Frame, but I can't see any apparent implementation of the DoF mechanism. Where is it hidden, and how does it achieve what it does (changing the camera position and basically re-rendering without advancing time)?

I don't know how DoF is implemented in jomme; there are a few simpler and faster methods besides the one you described. Follow the cvars controlling DoF to find out.