crispyambulance 2 days ago

Every time I see stuff like this it makes me think about optical design software.

There are applications (Zemax, for example) that are used to design optical systems (lens arrangements for cameras, etc). These applications are eye-wateringly expensive-- like similar in pricing to top-class EDA software licenses.

With the abundance of GPUs and modern UIs, I wonder how much work would be involved for someone to make optical design software that blows away the old tools. It would be ray-tracing, but with interesting complications like accounting for polarization, diffraction, scattering, fluorescence, media effects beyond refraction like birefringence, and things like the Kerr and Pockels effects, etc.

  • hakonjdjohnsen 2 days ago

    This, very much this!

    I do research in a subfield of optics called nonimaging optics (optics for energy transfer, e.g. solar concentrators or lighting systems). We typically use these optical design applications, and your observations are absolutely correct. Make some optical design software that uses GPUs for raytracing and reverse-mode autodiff for optimization, sprinkle in some other modern techniques, and you may blow these older tools out of the water.

    I am hoping to be able to get some projects going in this direction (feel free to reach out if anyone is interested).

    PS: I help organize an academic conference in my subfield of optics. We are running a design competition this year [1,2]. It would be super cool if someone submitted a design made by drawing inspiration from modern computer graphics tools (maybe using Mitsuba 3, by one of the authors of this book?), instead of using our field's classical applications.

    [1] https://news.ycombinator.com/item?id=42609892

    [2] https://nonimaging-conference.org/competition-2025/upload/

    • bradrn 2 days ago

      > I am hoping to be able to get some projects going in this direction (feel free to reach out if anyone are interested).

      This does sound interesting! I’ve just finished a Masters degree, also in non-imaging optics (in my case oceanographic lidar systems). I have experience in raytracing for optical simulation, though not quite in the same sense as optical design software. How should I contact you to learn more?

      • hakonjdjohnsen 2 days ago

        Interesting! I added an email address to my profile now

        • bradrn a day ago

          Great! I’ll send you an email now.

    • accurrent 2 days ago
      • hakonjdjohnsen 2 days ago

        Yes, exactly. I have not looked at Mitsuba 2, but Mitsuba 3 is absolutely along these lines. It is just starting to be picked up by some of the nonimaging/illumination community, e.g. there was a paper last year from Aurele Adam's group at TU Delft where they used it for optimizing a "magic window" [1]. Some tradeoffs and constraints are a bit different when doing optical design versus doing (inverse) rendering, but it definitely shows what is possible.

        [1] https://doi.org/10.1364/OE.515422

        • roflmaostc a day ago

          Shameless plug, we use Mitsuba 3/Dr.JIT for image optimization around volumetric 3D printing https://github.com/rgl-epfl/drtvam

          • pjmlp a day ago

            It looks quite interesting, especially the part about scripting everything in Python with a JIT, instead of traditionally having to do everything in either C or C++.

            Looking forward to some weekend paper reading.

          • hakonjdjohnsen a day ago

            Looks really cool! I look forward to reading your paper. Do you know if a recording of the talk is/will be posted somewhere?

            • roflmaostc a day ago

              We presented this work at SIGGRAPH ASIA 2024. But I think they do not record it?

              Maybe in some time we also do an online workshop about it.

        • accurrent a day ago

          I don't know much about optical engineering, but this sounds super exciting. I think I meant to point to Mitsuba 3, not 2.

  • ska a day ago

    This is one example of an area where economic incentives make it difficult to shift.

      - There aren't that many people willing to pay for such software, but those that do *really* need it, and will pay quite a bit (passing that cost on of course). 
     
      - The technical domain knowledge needed to do it properly is a barrier to many
     
      - It needs to be pretty robust
    
    As a result, you end up with a small handful of players who provide it. They have little incentive to modernize, and the opportunity cost for a new player is high enough to chase most of them off to other avenues.

    I think the main way this changes is when someone has already spent the money in an adjacent area and realizes "huh, with a little effort here we could probably eat X's lunch"

    Beyond that you at most get toy systems from enthusiasts and grad students (same group?) ...

  • Q6T46nT668w6i3m 2 days ago

    You’d be surprised! Everywhere I’ve worked, academic or industry, has typically written its own simulation software. Sometimes it’s entirely handwritten (i.e., end-to-end, preprocessing to simulation to evaluation), sometimes it leverages a pre-existing open source package. I imagine this will become more and more common if for no other reason than that you can’t back-propagate an OpticStudio project, and open source automatic differentiation packages are unbeatable.

  • amelius 2 days ago

    I once saw a youtube video of a guy who first modeled a pinhole camera in something like Blender3D and then went on to design and simulate an entire SLR camera.

    • Tomte 2 days ago
      • amelius 2 days ago

        Thanks, but it was a different video.

        I remember he had a lot of problems with the pinhole camera because the small size of the pinhole meant that rays had trouble going into the box, so to speak, and thus he needed an insane number of rays.

  • zokier 2 days ago

    I'd imagine there is a fairly wide gap between having a simulation engine core and a useful engineering application.

    From the academic side, I've found the work of Steinberg in this area extremely impressive. They are pushing the frontier to include more wave-optical phenomena in the rendering. E.g. https://ssteinberg.xyz/2023/03/27/rtplt/

  • fooker 2 days ago

    Well, everyone who can build this niche software is already employed to build it.

    • Q6T46nT668w6i3m 2 days ago

      I think you’re overthinking this, e.g., Zemax’s optimization isn’t that different than the ray-tracing presented in this book. The sophistication truly comes from the users.

    • crispyambulance 2 days ago

      Yeah, perhaps.

      But the heavy-hitters in this field all seem to have very old-timey UI's and out-of-this-world pricing.

      Meanwhile, raytracing for computer graphics on GPU's is soooo performant-- it makes me wonder how much work needs to be done to make the equivalent of KiCAD for optical design.

      • fooker a day ago

        You're missing the point. The difficulty is not in the ray tracing, etc. It is in understanding the domain of the software and what needs to be done to make it useful.

        I completely agree that whatever simulation they have can be easily done better with modern GPUs.

  • liontwist a day ago

    > eye-wateringly expensive

    For you. Anyone doing design and manufacturing of optics will not blink at paying for software.

  • lightedman 2 days ago

    " I wonder how much work would be involved for someone to make optical design software that blows away the old tools"

    Depending on use case, it already exists for gem cutters. We can simulate everything from RI to fluorescence excitation.

  • echelon 2 days ago

    I predict PBR is going to fall to neural rendering. Diffusion models have been shown to learn all of the rules of optics and shaders, and they're instructable and generalizable. It's god mode for physics and is intuitive for laypersons to manipulate.

    We're only at the leading edge of this, too.

    • CyberDildonics 2 days ago

      Can you link the neural rendering animation you're talking about with some info on the total rendering times without any precomputation?

timeforcomputer 2 days ago

I love this book so much. The literate programming style (I think inspired by Knuth's CWEB), the great writing, a beautiful high-quality physical book worth buying but also free access to the knowledge. The literate-programming style means you are almost immediately applying theory to a practical system. I keep having to take breaks to learn outside math/physics, but it is self-contained in principle, I think.

losvedir 2 days ago

All right, off topic but I've seen this a bunch lately and the term just really irritates my brain for some reason. What's its origin? "[adverb] based" just feels so wrong to me. Shouldn't that be a noun: "Evidence-based medicine", "values-based", "faith-based", etc. Does "physically based" bother anyone else?

  • giik 2 hours ago

    Fully agree. My brain hurts when i see adverbs in improper positions like so. It’s called an adverb for a reason…

  • gregw2 6 hours ago

    The term seems to go back as far as 1987 per Google ngram:

    https://books.google.com/ngrams/graph?content=physically+bas...

    (Tweak the parameters to end in 1995 to see what I mean.)

    I poked around papers from SIGGRAPH 1986-1988, and the closest I could find to the use of the term was a panel discussion in 1988:

    "STEVE FEINER: Our second speaker will be Professor Don Greenberg. ... Don's group has been at the forefront of computer graphics for some twenty years now, most recently doing pioneering work on physically based modeling and radiosity approaches." ( https://dl.acm.org/doi/10.1145/1402242.1402254 )

    I did run across a book using the term in 1994, "The State of the Art in Physically-based Rendering and its Impact on Future Applications" (to your point, using the hyphen between "physically" and "based"): https://link.springer.com/chapter/10.1007/978-3-642-57963-9_...

    The term around then was basically a catch-all for raytracing, radiosity, and BRDF-oriented rendering techniques, as opposed to triangle-texture-shading techniques. The former took way more compute-per-pixel/"visual impact" than the latter and thus was done in software, while the latter was starting to fit into the hardware transistor counts available at the time. There were a variety of syntheses of the various techniques, so some kind of shorthand was definitely needed.

    My guess, being mildly involved in the field in the mid-90s, is that while one could have in 1987 said "physics-based" rendering, the researchers, some of whom actually trained in physics, knew that even the techniques they were exploring (raytracing, radiosity, BRDF, light fields) were approximations of physics from the physics and optics literature and not actually grounded in the cycle of hypothesis and experimentation that would constitute "physics-based rendering".

    Later the term came to have more specific connotations. One good summary of it from a practical point of view when this book came out was: https://news.ycombinator.com/item?id=38110448

  • corysama a day ago

    It is a bit of a silly term. It was coined mostly to contrast against the somewhat more ad hoc methods that preceded it. The new technique was parameterized more around physical properties, where the older ones were more about visual descriptions.

    This paper from Disney is what kicked off the movement https://disneyanimation.com/publications/physically-based-sh...

    • buildartefact 15 hours ago

      What’s hilarious is there’s nothing physically based about the Disney model. It’s empirical, and it’s not even energy-conserving.

      As sibling pointed out, physically based rendering precedes “PBR” by a looong time and goes much, much deeper than “I put a metalness map in my shader”

    • nxobject a day ago

      Note that the book is even older than that – I remember first reading it in 2009; apparently the 1st edition was in 2004!

  • curiousObject a day ago

    But what alternative can you suggest which doesn’t break grammar or usage precedents like “physically based”?

    Physics-based? Reality-based? Physically-derived?

    • roelschroeven a day ago

      Physics-based sounds perfectly fine to me.

      "X-based" to me is equivalent to "based on X". Physics-based = based on physics, evidence-based = based on evidence, values-based = based on values; all perfectly fine.

      Physically based feels correct in a sentence like "Our company is physically based in New York but we operate world-wide". But what does the "physically based" in "physically based rendering" mean?

      But I'm not a native speaker, what do I know.

  • msk-lywenn a day ago

    It bothers me too, but I’m French. I always assumed it was some corner of the language I don’t know

taeric 2 days ago

This and https://hint.userweb.mwn.de/understanding_mp3/index.html are both amazingly fun examples of literate programming that I recommend all of the time.

magicalhippo 2 days ago

I've mentioned it before, but this book is amazing in the way it covers both the theory in detail, as well as the implementation.

There are often a lot of details that matter when implementing something efficiently and well, which the theory either hides or which arise from hardware limitations like floating-point numbers.

Almost all other programming books I've read either cover the theory in detail and gloss over the implementation details, or go into a lot of implementation stuff but only cover the easy parts and don't give you a good indication of how to deal with the advanced stuff.

So, are there any other books out there that you've read that are like PBR?

keyle 2 days ago

off topic: I live in this strange world where I can read the code and understand what it does, regardless of the language(!)

but the algorithm for the theory looks approximately like this to me

    (ijfio) $@ * foiew(qwdrg) awep2-2lahl [1]
Is there a good book that goes FROM programmer TO math-wizardry that you would recommend, without falling into the textbooks? Ideally I'd like to be able to read them and turn them into pseudocode, or I can't follow.

[1]: e.g. https://pbr-book.org/4ed/Introduction/Photorealistic_Renderi...

  • stonemetal12 a day ago

    The text seemed to describe it quite well. They just use a bunch of physics jargon because the book approaches rendering from the physics side of things.

    Light leaving a point in a direction = light emitted from that point in that direction (zero if we aren't talking about a light bulb or something) plus all light reflected by that point in that direction.
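    Written out in the notation of the linked page (the integral runs over all incoming directions ω_i on the sphere S^2), that sentence is:

        L_o(p, ω_o) = L_e(p, ω_o) + ∫_{S^2} f(p, ω_o, ω_i) L_i(p, ω_i) |cos θ_i| dω_i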

    • keyle 19 hours ago

      Sure, but I get lost at notations like the big S, `|` and other things. Those notations seem to have many standards, or I just can't seem to follow.

      In pseudocode, or any programming language, I'm right there with you.

      • magicalhippo 18 hours ago

        I used to feel like you before I went to university and had a few math courses. Then it became a lot more clear.

        And it really isn't that bad in most cases, and isn't unlike how we learnt that "int 10h" is how you change graphics modes[1] in DOS back in the days.

        The "big S" is an integral, which is in most cases essentially a for-loop but for continuous values rather than discrete values. You integrate over a domain, like how you can for-loop over a range or discrete collection.

        The domain is written here as just a collection of continuous values S^2, so like a for-in loop, though it can also be from and to specific values in which case the lower bound is written subscript and the upper bound superscript.

        Similar to how you have a loop variable in a for-loop, you need an integration variable. Due to reasons, this is written with a small "d" in front, so in this case "dω_i" means we're integrating (looping) over ω_i. It's customary to write it either immediately after the integral sign or at the end of the thing you're integrating over (the loop body).

        However, dω_i serves a purpose: unlike a regular discrete for-loop, integrals can be, let's say, uneven, and the "d" term serves to compensate for that.

        The only other special thing is the use of the absolute value function, written as |cosθ_i|, which returns the absolute value of cosθ_i, the cosine of the angle θ_i. Here θ_i is defined earlier in the book as the vertical angle of ω_i relative to the surface normal at the point in question, and its cosine can be calculated directly using the dot product.

        So in programmer-like terms, it reads a bit like this in pseudo-code.

            function L_o(p, ω_o):
              integral_value = 0
              for ω_i in S^2 do
                cos_θ_i = dot(ω_i, n)
                integral_value += f(p, ω_o, ω_i) * L_i(p, ω_i) * abs(cos_θ_i) * dω_i
              return L_e(p, ω_o) + integral_value
        
        Note that the surface normal "n" is implicitly used here, typically it would be passed explicitly in code.

        What's special here is that unlike a normal for-loop, in math the changes in ω_i, represented by dω_i, are infinitesimally small. But in practice you can actually implement a lot of integrals by assuming the changes are small but finite[2].

        Anyway, this wasn't meant as a full explanation of integrals and such, but just an attempt to show that it's not all gobbledygook.

        [1]: https://en.wikipedia.org/wiki/INT_10H

        [2]: https://en.wikipedia.org/wiki/Riemann_sum
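        To make the for-loop analogy concrete, here's a small runnable sketch (mine, not from the book) that estimates the same integral by Monte Carlo sampling. It assumes a constant BRDF of 1/π under uniform incoming radiance, for which the integral of |cosθ| over the sphere gives exactly 2; the function name and signature are made up for illustration.

```python
import math
import random

def estimate_reflected_radiance(f, L_i, n, n_samples=100_000):
    """Monte Carlo estimate of the reflection integral over the sphere S^2.

    f(ω_i): BRDF value for an incoming direction; L_i(ω_i): incoming radiance;
    n: unit surface normal. Uniform sphere sampling has pdf 1/(4π), so each
    sample is weighted by 4π -- this plays the role of dω_i in the for-loop.
    """
    total = 0.0
    for _ in range(n_samples):
        # Uniformly sample a direction ω_i on the unit sphere.
        z = random.uniform(-1.0, 1.0)
        phi = random.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(max(0.0, 1.0 - z * z))
        omega_i = (r * math.cos(phi), r * math.sin(phi), z)
        # The dot product with the normal gives cos(θ_i) directly.
        cos_theta = sum(a * b for a, b in zip(omega_i, n))
        total += f(omega_i) * L_i(omega_i) * abs(cos_theta)
    return total * 4.0 * math.pi / n_samples

# Constant BRDF 1/π under uniform radiance 1: the exact value is (1/π)·2π = 2.
est = estimate_reflected_radiance(lambda w: 1.0 / math.pi,
                                  lambda w: 1.0,
                                  (0.0, 0.0, 1.0))
```

        With 100k samples the estimate lands close to 2, which is the "finite but small dω_i" idea from the Riemann sum link above, just with random rather than regular steps.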

  • gspr a day ago

    As a mathematician by training who does a lot of programming for a living: This is the biggest misconception about math I see coming from programmers. There's frequently a complaint about notation (be it that it is too compact, too obscure, too gatekept, whatever) and the difficulty in picking out what a given "line" (meaning equation or diagram or theorem, or whatever) means without context.

    Here's the thing though: Mathematics isn't code! The symbols we use to form, say, equations, are not the "code" of a paper or a proof. Unless you yourself are an expert at the domain covered by the paper, you're unlikely to be able to piece together what the paper wants to convey from the equations alone. That's because mathematics is written down by humans for humans, and is almost always conveyed as prose, and the equations (or diagrams or whatever) are merely part of that text. It is way easier to read source code for a computer program because, by definition, that is all the computer has to work with. The recipient of a mathematical text is a human with mathematical training and experience.

    Don't interpret this as gatekeeping. Just as a hint that math isn't code, and is not intended to be code (barring math written as actual code for proof assistants, of course, but that's a tiny minority).

    • keyle 19 hours ago

      Fantastic read, thank you. It was never explained to me this way, nor had I imagined it this way.

      I'll try to keep an open mind and read more of the surrounding content.

      That said, there is a syntactic language, which many equations use, and the notations seem to vary or be presented differently. The big S is one; the meaning of `|`, which is probably not "OR"; etc. I wish there were a cheat sheet of sorts, but I feel it would be death by a thousand paper cuts, with 300 standards of doing things(?)

    • jayd16 a day ago

      Maybe you're right and maybe that's the culture in mathematics but we don't have to like it.

  • jahewson 2 days ago

    Screenshot and ask ChatGPT. Works pretty well.

    • nxobject a day ago

      +1 – often, for me, since a lot of the computations are estimated anyway, the biggest thing I need to do is ask ChatGPT to break down the notation and give me the precise terminology.

    • keyle 2 days ago

      Neat idea, thanks!

phkahler 2 days ago

Is there a standard OpenGL (ES3) shader I can drop into a GPLed application that uses a decent (is there a standard?) BRDF similar to Schlick, with red, green, blue, and roughness?

I've wanted to add this capability to Solvespace for a while. I did my own implementation with the Fresnel term and it looked OK, but I want something "standard" and correct.

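The Schlick approximation in question is F(θ) = F0 + (1 − F0)(1 − cosθ)^5, where F0 is the reflectance at normal incidence (~0.04 for common dielectrics). A minimal sketch, in Python for brevity (the port to GLSL is mechanical; the function name is illustrative):

```python
def schlick_fresnel(cos_theta: float, f0: float) -> float:
    """Schlick's approximation to the Fresnel reflectance.

    cos_theta: cosine of the angle between the view direction and the normal
    (or half-vector); f0: reflectance at normal incidence.
    """
    # Clamp so grazing/backfacing numerical noise can't push m outside [0, 1].
    m = max(0.0, min(1.0, 1.0 - cos_theta))
    return f0 + (1.0 - f0) * m ** 5

# At normal incidence (cosθ = 1) this returns f0; toward grazing angles
# (cosθ → 0) it tends to 1, i.e. near-total reflection.
```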
  • jms55 2 days ago

    There is no standard.

    Filament is extremely well documented: https://google.github.io/filament/Filament.html

    glTF's PBR stuff is also very well documented and aimed at realtime usage: https://www.khronos.org/gltf/pbr/

    OpenPBR is a newer, much more expensive BRDF with a reference implementation written in MaterialX that iirc can compile to glsl https://academysoftwarefoundation.github.io/OpenPBR

    Pick one of the BRDFs, add IBL https://bruop.github.io/ibl, and you'll get decent results for visualization applications.

  • fsloth 2 days ago

    I think most realtime production uses something similar to Disney BRDF when they refer to PBR.

    https://media.disneyanimation.com/uploads/production/publica...

    I don't think there is a standard as such that would dominate the industry. It's all approximations to give artists parameters to tweak.

    IMHO - being an ex-professional CAD person, I think PBR is the wrong visual style for most CAD though. The main point of shading there is to improve the perception of shape and part isolation. Traditional airbrush-based engineering illustration styles are a much better reference. So something like Gooch shading with depth-buffer unsharp masking would be much, much more appropriate.

    • phkahler a day ago

      >> Being an ex professional CAD person I think PBR is the wrong visual style for most cad though. The main point of shading is to improve the perception of shape and part isolation there. Traditional airbrush based engineering illustration styles are much better reference. So something like Gooch with depth buffer unsharp masking IMHO would be much much more appropriate.

      That is some excellent feedback. I like that our current simple Snell's law shader doesn't add specular highlights and can do flat shading by turning off all but the ambient light. I had never heard of "Gooch", and I don't recommend searching for it without adding "in the context of rendering". That looks interesting, and we already have code to draw the silhouette edges.

      I do think adding a PBR based Fresnel component with a backlight gives a similar but less sharp delineation of edges of objects, but it tends toward white right on the edge.

      While people like a good PBR render, I agree that in CAD the objective of the rendering is to clearly convey the geometry AND the sketching tools and entities the user is manipulating.

  • Pathogen-David 2 days ago

    You might find value in the glTF sample renderer https://github.com/KhronosGroup/glTF-Sample-Renderer

    It won't be plug and play since you'd have to pull out the shaders and make them work in your app, but the implementation supports quite a few material variants. The PBR shader implementation starts in source/Renderer/shaders/pbr.frag

  • jayd16 a day ago

    There's no standard, and even moving things from one 3D program to another can be a painful process if someone else didn't already build in some sort of conversion.

  • TinkersW 2 days ago

    Even if you have a standard specular & diffuse model, indirect lighting is really important and you can't really just drop that into a shader.

    If you want a standard, I'd go with OpenPBR; it is well documented and looks nice. Just skip the extra layers to begin with (fuzz/coat, etc.).

amelius 2 days ago

Why don't they link to the physical book?

  • mattpharr 2 days ago

    How would one link to a physical object?

    If this is what you’re asking: there are (perhaps too discreet) links at the bottom of each page to Amazon and MIT Press to purchase the physical book.

    • timeforcomputer 2 days ago

      I think the geo URI scheme might work if you have an exact location for the book.

  • rezmason 2 days ago

    Physically Based Reading