The point is to show it’s uncapped, since SDR only goes up to 200 nits. It’s not tonemapped in the image.
But, please, continue to argue in bad faith and complete ignorance.
From what I remember of my old Game Boy, it took 4 AA batteries in alternating orientation, which gave 6V (1.5V per battery). Chaining positive to negative in series adds the voltages.
Since this has them both pointing up, they’re in parallel: it’s still 1.5V, but it’s as if you put in a half-sized battery.
Basically the same, just less amperage because it’s effectively a smaller battery (if compared to 2 of the same).
tl;dr: same voltage, half capacity.
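The arithmetic above as a quick sketch (nominal AA values; the capacity figure is an assumption, real cells vary):

```python
# Back-of-the-envelope series vs. parallel math for AA cells.
CELL_V = 1.5    # volts per AA cell (nominal)
CELL_AH = 2.5   # amp-hours per cell (typical alkaline AA, an assumption)

# Series (old Game Boy, 4 cells alternating): voltages add, capacity doesn't.
series = {"volts": 4 * CELL_V, "amp_hours": CELL_AH}      # 6.0 V, 2.5 Ah

# Parallel (cells pointing the same way): voltage stays, capacities add.
parallel = {"volts": CELL_V, "amp_hours": 2 * CELL_AH}    # 1.5 V, 5.0 Ah

# One cell instead of two in parallel: same volts, half the capacity.
single = {"volts": CELL_V, "amp_hours": CELL_AH}          # 1.5 V, 2.5 Ah

print(series, parallel, single)
```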
He should bring his grievances before some sort of tribunal presided over by one or several judges in which legal issues and claims are heard and determined: specifically one that specializes in mammals of the marsupial sort.
This is a trash take.
I just wrote the ability to take a DX9 game, stealthily convert it to DX9Ex, remap all the incompatible commands so it works, proxy the swapchain texture, set up a shared handle for that proxy texture, create a DX11 swapchain, read that proxy into DX11, and output it in true, native HDR.
All with the assistance of CoPilot chat to help make sense of the documentation and CoPilot generation and autocomplete to help setup the code.
All in one day.
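For anyone curious, the shared-handle bridge between the two APIs looks roughly like this. This is a hedged sketch, not the actual mod code: the device names, the format choice, and the per-frame copy are assumptions, and error handling is omitted.

```cpp
// Hedged sketch of a D3D9Ex -> D3D11 shared-texture bridge.
#include <d3d9.h>
#include <d3d11.h>

void CreateSharedProxy(IDirect3DDevice9Ex* dev9, ID3D11Device* dev11,
                       UINT width, UINT height)
{
    // 1. D3D9Ex side: create the proxy render target with a shared handle
    //    (pSharedHandle only works on 9Ex with D3DPOOL_DEFAULT).
    HANDLE sharedHandle = nullptr;
    IDirect3DTexture9* proxyTex = nullptr;
    dev9->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                        D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT,
                        &proxyTex, &sharedHandle);
    // ...each frame, StretchRect the game's backbuffer into proxyTex...

    // 2. D3D11 side: open the same memory through the shared handle.
    ID3D11Texture2D* sharedTex = nullptr;
    dev11->OpenSharedResource(sharedHandle, __uuidof(ID3D11Texture2D),
                              reinterpret_cast<void**>(&sharedTex));
    // ...sample sharedTex and present on a DX11 swapchain created with an
    // HDR format such as DXGI_FORMAT_R16G16B16A16_FLOAT...
}
```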
The Hejl Dawson tonemapper is a filmic tonemapper built at EA years ago. It’s very contrasty, similar to ACES (which Unreal mimics in SDR and uses for HDR).
The problem is, it completely crushes black detail.
https://www.desmos.com/calculator/nrxjolb4fc
Here it is compared to the other common one, the Uncharted 2 tonemapper:
Everything under 0 is crushed.
To note, it’s exclusively an SDR tonemapper.
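For reference, here’s a sketch of both curves in Python, using the commonly circulated constants (the Uncharted 2 values are John Hable’s published ones; treat the exact numbers as assumptions):

```python
# Hejl/Burgess-Dawson filmic curve vs. Hable's Uncharted 2 curve.

def hejl_dawson(x: float) -> float:
    # The 0.004 toe subtraction is what crushes shadow detail:
    # any input at or below 0.004 maps straight to black.
    x = max(0.0, x - 0.004)
    # Note: this form bakes an approximate sRGB gamma into the output.
    return (x * (6.2 * x + 0.5)) / (x * (6.2 * x + 1.7) + 0.06)

def uncharted2(x: float, exposure_bias: float = 2.0) -> float:
    A, B, C, D, E, F, W = 0.15, 0.50, 0.10, 0.20, 0.02, 0.30, 11.2
    def curve(v: float) -> float:
        return ((v * (A * v + C * B) + D * E)
                / (v * (A * v + B) + D * F)) - E / F
    return curve(exposure_bias * x) / curve(W)  # normalize to white point W

# Deep shadows: Hejl/Dawson clips to zero, Uncharted 2 keeps a gradient.
print(hejl_dawson(0.003))  # 0.0 -- detail below the toe is gone
print(uncharted2(0.003))   # small but nonzero
```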
I’ve found this tonemapper in Sleeping Dogs as well, and when modding that game for HDR it was very noticeable how much it crushed. Nintendo would need to change the tonemapper to an HDR one or, what I think they’ll do, fake the HDR by just scaling up the SDR image.
To note, I’ve replaced the tonemapper in Echoes of Wisdom with a custom HDR tonemapper via Ryujinx and it’s entirely something Nintendo can do. I just doubt they will.
“If the answer is yes, you should be incredibly proud of yourself.” (My guess)
I decompiled Echoes of Wisdom. It uses the pretty horrible Hejl Dawson tonemapper. Pretty sure the HDR is going to be faked with inverse tonemapping.
LAN ports have been standard, thankfully, since the Switch OLED.
Yeah, I can see that. Search has gotten worse. While AI slop is undoubtedly responsible for this, there are cases when some things are essentially best solved by reading thousands of code examples because the documentation is rather vague. Searching on Stackoverflow still relies on some people having already been presented with a similar situation and shared their solution. Also, you’d assume the solution is the correct one. (I’ve been burned and I’m sure the majority of my stackoverflow answers end up being corrections well after trying something else touted as the correct/popular solution.) That’s really my push back.
That’s really one of the strengths of AI: a large feeding of data until it finds a common pattern. It correlates to simple things like syntax. That means it’s pretty good there. But it also correlates to saying “a lot of people set up scripts like this”. That’s where I’m reminded of working with people who I assign a task to and they come back with stuff they got from SO. It has the gist of it being right, but not all there.
That’s kinda the key, though. I could be okay with an 80% workable state. That’s like asking somebody to compile all the search results and give me back a result as best they could. It doesn’t mean it should be treated as hot-pluggable code.
Full disclosure, my main experience is CoPilot and VSCode. It’s… neat. Some of the auto complete is useful when what I’m writing has an obvious pattern. Some is laughably unrelated. There is another AI that has some level of training to it, which I think is Facebook’s. It can be “trained”. I’ve tried those models, but all those offline models don’t have the ability to combine web results. CoPilot lets you link to a spec page and it’ll read it in “realtime” and correct itself. I find that much more valuable than some pretrained model. The saddest part is that’s all proprietary in ChatGPT which was supposed to be Open (OpenAI). You basically have to buy-in to their models at least until something else comes along.
Thanks. That’s my point.
Again: absolutely no point related to AI or to anything in the content of what I said. Maybe you don’t know what ad-hominem is? It means attacking the person, not the topic. How many times did you say “you” in your reply? Count them. How many times did you address any of my points? Zero.
Had this exact thought. But number must go up. Hell, for the suits, addiction and dependence on AI just guarantees the ability to charge more.
The first sentence of my comment?
Make a point or go away. Ad-hominem nonsense is boring.
Not all projects need VC money to get off the ground. I’m not going to hire somebody for a pet project because CMake’s syntax is foreign to me, or a pain in the ass to write. Or because I’m not interested in spending 2 hours clicking through their documentation.
Or if you’ve ever used DirectX, you know the insane “code by committee” way it works. The documentation is ass and at best you need code samples. Hell, I had to ask CoPilot how something in DXCompiler worked and it told me it worked because the 5000 line cpp file had it somewhere in there. It was right, and to this day I have no idea how it came up with the correct answer.
There is no money in most FOSS. Maybe you’ll find somebody who’s interested in your project, but it’s extremely rare that somebody latches on. At best, you both have your own unique, personal projects and they overlap. But sitting and waiting for somebody to come along while your project grinds to a halt is just not a thing if an AI can help write the stuff you’re not familiar with.
I know “AI bad” and I agree with the sentiment most of the time. But I’m personally okay with the contract of, I feed GitHub my FOSS code and GitHub will host my repo, run my actions, and host my content. I get the AI assistance to write more code. Repeat.
There’s a lot of false equivalence in this thread which seems to be a staple of this instance. I’m sure most people here have never used AI coding and I’m just getting ad-hominem “counterpoints”.
Nothing I said was even close to saying AI is a full replacement for training junior devs.
The reality is, when you actually use an AI as a coding assistant, there are strong similarities to training somebody who is new to coding. They’ll choose popular over best practices. When I get an AI-assisted code segment, it feels similar to code copypasted from Stack Overflow. And that’s aside from the hallucinations.
But LLMs operate on patterns, for better or for worse. If you want to generate something serious, that’s a bad idea. There’s a strong misconception that AI will build usable code for you. It probably won’t. It’s only good at snippets. But it does recognize patterns. Some of those patterns are tedious to write, and I’d argue they feel even more tedious the more experienced you are at coding.
My most recent usage of AI was making a script that uses WinGet to set up a dev environment. I have a vague recollection of how to make a .cmd script with if branches, but not enough off the top of my head. So you can say “Generate a section here that checks if WinSDK is installed.” And it will. Looks fine, move on. The %errorlevel% code is all injected. Then say “add a WinGet install if it’s not installed.” Then it does that. Then I repeat all that for ninja, clang, and others. None of this is mission critical, but it’s a chore to write. It’ll even sprinkle in some pretty CLI output text.
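The generated sections look roughly like this. A hedged sketch, not the actual script: the WinGet package id and flags are assumptions.

```bat
@echo off
REM Hypothetical sketch: check for a tool, install via WinGet if missing.
where ninja >nul 2>&1
if %errorlevel% neq 0 (
    echo Ninja not found, installing via WinGet...
    winget install --id Ninja-build.Ninja -e --accept-source-agreements
) else (
    echo Ninja already installed.
)
```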
There is a strong misconception that AI is “smart” and programmers should be worried. That completely oversells what AI can do, probably intentionally on the part of executives. It is at best an assistant to coders. I can take a piece of JS code and ask AI to construct an SQL table creation query based on it (or vice versa). It’s not difficult. Just tedious.
When working in teams, it’s not uncommon for me to create the first 5%-10% of a project and instruct others on the team to take that as input and scale the rest of the project (eg: design views, build test, build tables, etc).
There are clear parallels here. You need to recognize the limitations, but there is a lot of functionality they can provide as long as you understand what it can’t do. Read the comments of people who have actually sat down and used it and you’ll see we’ve the same conclusion.
You can’t turn an AI into a senior dev by mentoring it, however the fuck you’d imagine that process?
Never said any of this.
You can tell AI commands like “this is fine, but X is flawed. Use this page to read how the spec works.” And it’ll respond with the corrections. Or you can say “this would leak memory here”. And it’ll note it and make corrections. After about 4 to 5 checks you’ll actually have usable code.
We would have also accepted a bluer yellow.