Conversation
|
feel free to cherrypick my commit with the png optimizations here: lumiscosity@6fde3d0 |
|
ah, got it! |
|
Worth noting that a lot of the gains in CTR were found by re-creating the assets in lower resolution or colour depth. A lot of them had highly subtle gradients or compression artefacts that ballooned out the filesize. Fixing that is where a lot of the 90% savings came from. Automated compression like this is still valuable and free so we should absolutely do it, but I would also like the process to be at least written down somewhere so that if I come back to apply that type of manual tuning to the WiiU assets too, I can re-apply it. |
|
See #168. I forgot a lot of those CTR assets are inlined as data URLs in stylesheets. We will fix that to be esbuild includes at some point, but in the meantime manual updates are required to actually put the improved images into the file the client gets. |
|
Would you still like to work on this? Portal and web need some love. I can do some manual work on it, but the automation is certainly nice. |
|
I would, though college is kicking me down a bit. I should have some progress on this by Saturday. |
|
Minor thing I noticed just by looking over some of the files: the image https://github.com/PretendoNetwork/juxtaposition/blob/dev/apps/juxtaposition-ui/webfiles/ctr/images/background.png is 256x256, but it's actually just a 2x2 pixel image scaled up to 256x256. How well does the 3DS support gradients? The following CSS produces a checkerboard background identical to the one from the image, and it frees 437 bytes since the image is no longer used, but I've only tested it in my regular browser, not on consoles. It should only use properties available at the time, but I don't know if Nintendo supported them in the browser used by Miiverse. https://caniuse.com/?search=linear-gradient says they should be supported in early WebKit versions, but notes that those versions implement an earlier prefixed syntax.
It doesn't go into detail, though, so it's possible some tweaks might need to be made. Edit: updated the snippet to the 2010 syntax; it seems to work exactly the same in my browser, but it still needs console testing. #body {
background-color: #EBEBEB;
background-image:
-webkit-gradient(linear, 0% 0%, 100% 100%,
color-stop(0%, #F3F3F3),
color-stop(25%, #F3F3F3),
color-stop(25%, transparent),
color-stop(75%, transparent),
color-stop(75%, #F3F3F3),
color-stop(100%, #F3F3F3)
),
-webkit-gradient(linear, 0% 0%, 100% 100%,
color-stop(0%, #F3F3F3),
color-stop(25%, #F3F3F3),
color-stop(25%, transparent),
color-stop(75%, transparent),
color-stop(75%, #F3F3F3),
color-stop(100%, #F3F3F3)
);
background-size: 100px 100px;
background-position: 0 0, 50px 50px;
} |
|
I also wonder if we might save some bytes by converting some simple images into font glyphs? Nintendo did this quite often with their UIs outside of Miiverse (unsure about Miiverse itself); they just slapped icons into the font as glyphs and referenced them that way. Sprites like https://github.com/PretendoNetwork/juxtaposition/blob/dev/apps/juxtaposition-ui/webfiles/ctr/images/sprites/feeling-frustrated.png are a whole kB each, so I wonder if the combined size of these sprites would be smaller as a custom font? Though that might make the pages kinda janky, unsure. Just spitballing |
This isn't quite true: it has rounded corners on the darker squares (though it might be worthwhile to lose that design element if there's a big perf uplift) |
I had to massively zoom in on my phone to be able to see this detail; the rounding is pretty subtle, and the color similarities made me miss the roundness entirely on my Mac, even when I was viewing the image in GIMP to color-pick it. I suspect it would be similarly unnoticeable on consoles, so I agree it might be worth removing that to save on the bytes/network request. It might be possible to emulate it via the gradient method or something else, but that would likely stretch what the 3DS is capable of. That being said, this all hinges on whether the 3DS can render these gradients at all. The properties I used were available in 2010, but it's possible Nintendo still didn't support them |
|
Copying @lumiscosity's comment on the other issue here for easier/further analysis:
Still have yet to figure out what magic @imgbot is doing, but I may need to take a closer look. It does seem to be somewhat different from just using each of the programs with standard/no extra arguments. Will play around with some GitHub Actions and see where it gets me. |
|
great research! so far the blogpost just has this copied in verbatim, but i'll probably tack on your findings. good catch with the defluff thing: looks like odiff doesn't check for colorspace info for some reason??? confirmed with a hex editor that the colorspace chunks get yeeted, whoops |
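If anyone wants to check for this without reaching for a hex editor: here's a minimal pure-stdlib sketch (my own, not part of the pipeline) that lists the chunk names in a PNG, so you can diff them before and after an optimization pass and see whether colorspace chunks like gAMA/sRGB/iCCP survived:

```python
import struct

PNG_SIGNATURE = b'\x89PNG\r\n\x1a\n'

def png_chunk_names(data: bytes) -> list[str]:
    """Return the chunk type names of a PNG, in file order."""
    if not data.startswith(PNG_SIGNATURE):
        raise ValueError('not a PNG file')
    names = []
    offset = len(PNG_SIGNATURE)
    while offset < len(data):
        # Each chunk: 4-byte big-endian length, 4-byte type name,
        # then the payload and a 4-byte CRC.
        length, ctype = struct.unpack_from('>I4s', data, offset)
        names.append(ctype.decode('ascii'))
        offset += 12 + length  # 8-byte header + payload + 4-byte CRC
    return names
```

Run it on the same file before and after compressing; if gAMA or iCCP vanish from the list, the tool stripped colorspace info.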
|
Something I just found. I had some concerns about using pngout: the author seems strongly against releasing the source code, and I'm not sure what the licensing rights look like here (especially for the ports), which makes trying to make this reproducible a bit legally annoying. The downloads are also just from forums/a guy's personal website (and ideally this workflow would be included somewhere in the project). A better alternative, in my opinion, would be zopfli. It's maintained by Google, open source under Apache 2.0, and hosted on GitHub, which eases all the concerns I had with pngout. I haven't done extensive testing on all images, but when I tested it on https://github.com/PretendoNetwork/juxtaposition/blob/dev/apps/juxtaposition-ui/webfiles/ctr/images/bandwidthalert.png I got nearly identical results to pngout: it's still 30 bytes larger than pngout from @lumiscosity's changes, but I also didn't test any of the other tools in that pipeline, so maybe the real final result would match what's in @lumiscosity's changes (and I think an extra 30 bytes is worth the open source/reliable downloads, to be honest). Samples:
|
|
From the website, this is the PNGOUT license:
I don't know how easily we can skirt this as non-commercial, given that Pretendo takes donations and has some form of paid memberships. So it does look like automating with PNGOUT is a no-go. |
The bigger issue is that the license doesn't cover the ports. I did see this on his website, but it only explicitly covers the Windows versions. The site hosting the ports doesn't contain any license information that I can find, and the author also refuses to share the source code:
|
|
Using Using I think just using
To be clear, I'm forever grateful for the work lumiscosity has been doing. It's great work and genuinely helpful. I'm simply trying to find some possibly better alternatives for these tools that would work better in our workflow |
|
Sorry for the triple message. I ran a quick test to compare the compression results from lumiscosity@6fde3d0 to using
In most cases it looks like |
|
I added in Commands are just these, run in a loop for all the files: This looks to be the way to go moving forward. The pipeline is simpler (only 2 commands, both called once), and the other issues with the previous pipeline are basically nonexistent, as these 2 tools are both properly open source while having basically the same or better results.
|
it's all good, i'm happy to take any improvements here and add them to my workflow as well. we're all learning! |
|
New day, new tests. I think I've pushed optimization as far as it can reasonably go without getting into lossy compression. Instead of using purely Commands:
I reran my script to check the latest reductions for all images in the
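For anyone wanting to reproduce this kind of before/after accounting, here's a rough sketch (the directory layout is hypothetical) that tallies per-file and total savings between an original tree and an optimized copy:

```python
from pathlib import Path

def savings_report(original: Path, optimized: Path) -> dict[str, tuple[int, int]]:
    """Map each PNG's relative path to (bytes before, bytes after)."""
    report = {}
    for before in original.rglob('*.png'):
        rel = before.relative_to(original)
        after = optimized / rel
        if after.exists():
            report[str(rel)] = (before.stat().st_size, after.stat().st_size)
    return report

def total_percent_saved(report: dict[str, tuple[int, int]]) -> float:
    """Overall size reduction across the whole report, as a percentage."""
    before = sum(b for b, _ in report.values())
    after = sum(a for _, a in report.values())
    return 100.0 * (before - after) / before if before else 0.0
```

This just compares two copies of the assets on disk, so it works regardless of which compression tools produced the optimized tree.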
As for shipping this pipeline, I thought it might be nice to create like a |
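As a sketch of what such a wrapper could look like: the tool flags below are copied from the oxipng/ect invocations discussed in this thread, but the structure (tool-availability check, command builder) is my own addition, so treat it as a starting point rather than a finished script:

```python
import shutil
import subprocess
import sys
from pathlib import Path

# Flags copied from the oxipng/ect pipeline discussed in this thread.
PIPELINE = [
    ['oxipng', '-o', 'max', '--zopfli', '--zi', '50', '--ng', '--strip', 'all'],
    ['ect', '-9', '-strip', '--allfilters-b'],
]

def commands_for(png: Path) -> list[list[str]]:
    """Full argv lists for one file, in execution order."""
    return [[*cmd, str(png)] for cmd in PIPELINE]

def optimize_tree(base: Path) -> None:
    """Run the whole pipeline over every PNG under base, in place."""
    # Fail loudly up front instead of half-processing the tree.
    for tool in (cmd[0] for cmd in PIPELINE):
        if shutil.which(tool) is None:
            sys.exit(f'{tool} not found on PATH')
    for png in sorted(base.rglob('*.png')):
        for cmd in commands_for(png):
            subprocess.run(cmd, check=True)
```

Passing argv as a list (rather than a shell string) also sidesteps quoting issues with unusual file names.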
|
Here is a version of my script that processes all images, using multiprocessing. It completes all images in about 20 minutes. I can add this as a separate PR, or push it directly to this one, if you'd prefer.
const path = require('path');
const fs = require('fs');
const child_process = require('child_process');
const cluster = require('cluster');
const os = require('os');
const baseDirectory = 'apps/juxtaposition-ui';
const filePaths = fs.readdirSync(baseDirectory, {
recursive: true
}).filter(filePath => filePath.endsWith('.png')).map(filePath => path.join(baseDirectory, filePath));
if (cluster.isPrimary) {
const numCPUs = os.cpus().length;
let completed = 0;
console.log(`Processing ${filePaths.length} files with ${numCPUs} workers...`);
const chunkSize = Math.ceil(filePaths.length / numCPUs);
for (let i = 0; i < numCPUs; i++) {
const worker = cluster.fork();
const chunk = filePaths.slice(i * chunkSize, (i + 1) * chunkSize);
worker.on('message', (message) => {
if (message.type === 'progress') {
completed++;
console.log(`[${completed}/${filePaths.length}] ${message.file}`);
}
if (completed === filePaths.length) {
process.exit(0);
}
});
worker.send({ chunk });
}
} else {
process.on('message', ({ chunk }) => {
for (const filePath of chunk) {
try {
// Quote the path so files with spaces don't break the shell command
child_process.execSync(`oxipng -o max --zopfli --zi 50 --ng --strip all "${filePath}"`);
child_process.execSync(`ect -9 -strip --allfilters-b "${filePath}"`);
} catch (error) {
console.error(`Error processing ${filePath}: ${error.message}`);
}
process.send({
type: 'progress',
file: filePath
});
}
process.disconnect();
});
} |
|
Tis I once again, here for your daily notification. I made a PR for ECT to add a library API, so that we can make bindings for it in our pipeline rather than rawdogging CLI calls. The PR is here for those interested: fhanau/Efficient-Compression-Tool#152 |
|
This is something I did want to bring up. Is there a reason we went with PNG for non-transparent files? Would it be possible to make some of these assets JPEG, or something else the Wii U and 3DS support? Everything else here is sweet; thank you both! I'd like to see if we can find a way to automate the workflow (whether with Jon's script or something else). Then, for this PR, if the closed-source tools give any more savings, I can manually add those savings in a separate commit and leave it at that. |
Does it need to be via DM? Iirc you're in our Discord server, could it not be asked there in an appropriate channel? Or if it's related to some of our services, as a "Question" issue on the relevant repository? Genuinely asking, since I don't really like to DM much (just in general) which is why I have them silenced on Discord and don't check message requests. I'm willing to if absolutely necessary but I'd like to not if possible (again this isn't a you thing, I just can't stand DMs 99% of the time)
Unsure, presumably it was a tradeoff between file size and image quality? I didn't make that decision, @ashquarky would have to answer that one and decide if they think JPEG is reasonable here. I will say though that having everything as PNG does make the pipeline simpler and we already have working tools for PNGs. Switching to JPEG would effectively start this whole process over again, trying to find suitable tools to optimize them |
Doesn't need to be a DM; but I figured you were probably the one that knew the most about it, and I didn't see an existing issue/question/etc for it anywhere (though I'm pretty sure it would apply to multiple services). I can ask it in one of those channels though. I completely understand the hesitance to DM, I'm similar when it comes to Discord in particular! Edit: I did the thing
True, but JPEG is already fairly well-compressed, and we've already mentioned a tool to compress it further. As long as the JPEG is actually fewer bytes than the PNG and the image quality is acceptable, I think this would be a reasonable tradeoff. Most users won't care about anything but "it work!" at a fast rate (although I guess that statement could cut either way)
I'd appreciate this being pushed directly (or, alternatively, made as an independent repository for general use?) |
|
Just made a commit with compressed images! @lumiscosity's work added a single additional byte of compression 🙏 Will amend and verify and add co-authors and etc in a bit. |
I'm curious which one had the extra byte? In my test all images were smaller using the new oxipng/ect pipeline |
|
Google has some recs on JPEG compression settings: https://developers.google.com/speed/docs/insights/OptimizeImages |
Ah, sorry for the misunderstanding. Your pipeline was smaller, which is why I ran it first. I then ran Lumi's pipeline on top. None of the files were changed at all in Lumi's pipeline, except one. Also, yes, it's been more than a bit, sorry. I'm on a roadtrip right now and God I've been so tired. "In a second", she said. Anyway, it's been 3 hours. Here's the single file I was referring to: I don't know what file this is for sure, but I feel very comfortable assuming it's But I was actually wrong about it being only a single byte! The And DeflOpt got a few extra single bytes in these files too: |
This commit was built atop investigation and research from @lumiscosity and @jonbarrow. As such, I have made them co-authors of this commit to give them their credit. Further discussion in issue PretendoNetwork#262 and especially pull request PretendoNetwork#266.
Co-authored-by: lumiscosity <averyrudelphe@gmail.com>
Co-authored-by: ImgBotApp <ImgBotHelp@gmail.com>
Co-authored-by: Jonathan Barrow <jonbarrow1998@gmail.com>
Co-authored-by: Sienna "suprstarrd" M. <business@suprstarrd.com>
Signed-off-by: ImgBotApp <ImgBotHelp@gmail.com>
Signed-off-by: Sienna "suprstarrd" M. <business@suprstarrd.com>
|
Heads up that I may force-push again if either of you want the Besides that irritation, the only thing left to figure out for this pull request is automating image optimization for the future. I am incredibly tired at the moment, but this will be something I prioritize tomorrow after I catch up on some more college work. With that said, this is in a good enough state for review/merging if that's something you all want to do right now and skip the automation. Cheers. |
This commit was built atop investigation and research from @lumiscosity and @jonbarrow. As such, I have made them co-authors of this commit to give them their credit. Further discussion in issue PretendoNetwork#262 and especially pull request PretendoNetwork#266.
Co-authored-by: Sienna "suprstarrd" M. <business@suprstarrd.com>
Co-authored-by: lumiscosity <averyrudelphe@gmail.com>
Co-authored-by: Jonathan Barrow <jonbarrow1998@gmail.com>
Co-authored-by: ImgBotApp <ImgBotHelp@gmail.com>
Signed-off-by: Sienna "suprstarrd" M. <business@suprstarrd.com>
Signed-off-by: lumiscosity <averyrudelphe@gmail.com>
Signed-off-by: ImgBotApp <ImgBotHelp@gmail.com>
Given how small a difference defluff seems to make (only one to a few bytes per image), for our needs I don't think it's worth the hassle of trying to include it in any automation, given the platform/licensing concerns, even if it does technically improve things. The oxipng+ect pipeline gets things "good enough" while fitting within our needs. I tried using advdef (from https://github.com/amadvance/advancecomp) on
|
@suprstarrd Just to confirm: you want me to review this as-is, and then automation will be a separate thing? Or should I leave you to cook on the automation front |
|
Is yes a valid answer? (Review as-is - i.e. did I miss anything and do the images load properly; can't check on 3DS/Wii U rn. Then I'll cook on automation)
I agree. Wasn't going to make it anything beyond that, just need to actually implement it is all. |
|
Quick test on a random community in modern Chrome. Definitely some nice free improvement, with Portal being the biggest winner. Note that this was a rough test with debug and similar enabled, so we could expect a bigger % difference in prod, too. |
ashquarky
Checked Portal and CTR, everything renders fine
binaryoverload
I know the automation is a WIP, but could we have the compression process that was used in the end documented somewhere, please?
|
Sorry for the merge conflict ^^; Since you're re-running the script anyway, I did a bunch of other asset changes I had pending: https://github.com/PretendoNetwork/juxtaposition/tree/work/design-assets You can rebase onto that branch, cherry-pick it, or merge it, whatever you'd like; works for me. (Usually we'd be a bit upset about a rebase during review, since you lose the "Viewed" status on the files on GitHub, but since this is all binary files rather than actual code, losing the diff doesn't make a difference) |
|
For any future optimisers, since this issue is becoming a bit of a hub for that info; my process for design assets in GNU IMP is:
Since the indexed colour part of the process removes information (just usually too subtly to be visible) and requires some human judgement on the alpha channel point, it appears automated tools like oxipng don't apply that transform. Instead of storing better, it's storing less. Of course this still stacks with storing better, so I'll be excited to see what the script does with these :3 |
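To illustrate the "storing less" point: indexed colour only works when an image has at most 256 distinct colours (or can be quantized down to that). Here's a quick sketch of that feasibility check on raw RGBA pixel bytes; it's pure stdlib, so actually getting the pixels out of a real PNG would still need a decoder like Pillow:

```python
def can_palettize(rgba: bytes, max_colors: int = 256) -> bool:
    """True if the RGBA pixel data uses few enough distinct colours
    to fit in an indexed (PLTE) palette."""
    if len(rgba) % 4:
        raise ValueError('expected 4 bytes per pixel')
    seen = set()
    for i in range(0, len(rgba), 4):
        seen.add(rgba[i:i + 4])
        if len(seen) > max_colors:
            return False  # bail out early; no palette can hold this
    return True
```

The human-judgement part Lumi mentions (deciding which near-duplicate colours and alpha levels can be merged) is exactly what this check can't capture, which is presumably why automated tools stay conservative.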
Man, if only someone was writing a blog post about this
Yes! |
|
I'll finish this one up ^^ |
This commit was built atop investigation and research from @lumiscosity and @jonbarrow. As such, I have made them co-authors of this commit to give them their credit. Further discussion in issue PretendoNetwork#262 and especially pull request PretendoNetwork#266.
Co-authored-by: Sienna "suprstarrd" M. <business@suprstarrd.com>
Co-authored-by: lumiscosity <averyrudelphe@gmail.com>
Co-authored-by: Jonathan Barrow <jonbarrow1998@gmail.com>
Co-authored-by: ImgBotApp <ImgBotHelp@gmail.com>
Signed-off-by: Sienna "suprstarrd" M. <business@suprstarrd.com>
Signed-off-by: lumiscosity <averyrudelphe@gmail.com>
Signed-off-by: ImgBotApp <ImgBotHelp@gmail.com>
|
Wasn't able to quite match @jonbarrow's results despite running the same command (probably an OS thing with different versions of libpng and zlib), but it's still quite a good improvement, and we can always have another PR with a second pass once others are more available to do so. Dropped the script I used in as well; it's basically the same as Jon's, just as quick-and-dirty shell so nobody gets any ideas about this being ready to automate, aha. Still need to look at SVGs. |
will get around to it later this/next week, hopefully. i haven't been able to get ECT running on my pc, but maybe it just needs a bit more fiddling, then i'll round up the info from here, run a few more tests and let it loose |
|
@ashquarky @suprstarrd What's the status of this? Is this ready to be merged? |
|
I need to final test this to make sure the applets can load the images and just haven't gotten around to it |



Resolves #262. If I remember correctly, there are a few other related issues we can revisit here too.
Changes:
This is a pull request that optimizes the images Juxt offers as best as possible.
Here's what @imgbot was able to do:
Using and integrating @lumiscosity's work can additionally grant larger savings.