Have a source image and a blank image

> Have a source image and a blank image.
> Draw a shape (in my case, a circle) at a random point with a random color taken from the source image.
> If the new image is closer to the source image, keep it; if not, revert to the previous image.
> Run for a while, post the result.

Here is my result after running for 1 hour. Can't say I'm happy with it, but I don't know where to improve.

Attached: 1hour.png (1418x1045, 1.26M)
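
For anyone who wants to try this at home, here is a minimal sketch of the loop OP describes, in C++ with OpenCV. This is not OP's actual code; the file names, radius and iteration count are placeholder choices, and it deliberately does the slow thing of re-comparing the whole image every iteration.

#include <opencv2/opencv.hpp>
#include <random>

int main()
{
    cv::Mat source = cv::imread("source.png");                          // target image
    cv::Mat canvas(source.size(), source.type(), cv::Scalar(0, 0, 0));  // blank (black) image

    std::mt19937 rng(std::random_device{}());
    std::uniform_int_distribution<int> rx(0, source.cols - 1);
    std::uniform_int_distribution<int> ry(0, source.rows - 1);

    double best = cv::norm(source, canvas, cv::NORM_L1);                // current distance to source

    for (int i = 0; i < 1000000; i++)
    {
        cv::Point p(rx(rng), ry(rng));
        cv::Vec3b c = source.at<cv::Vec3b>(p);                          // color sampled from the source
        cv::Mat candidate = canvas.clone();
        cv::circle(candidate, p, 8, cv::Scalar(c[0], c[1], c[2]), cv::FILLED);

        double d = cv::norm(source, candidate, cv::NORM_L1);
        if (d < best) { canvas = candidate; best = d; }                 // keep it only if it got closer
    }
    cv::imwrite("result.png", canvas);
}

The only real work after that is making the comparison not scan the full image every time, which is what most of the thread ends up being about.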

Let me run my implementation on this, hold on.

What's the point?

Do your own homework.

That's all you have after an hour? Your program is extremely slow if it's only doing a few million iterations in an hour. You might be able to make it faster if you quantize the colors so you don't spend time trying to match a color that only occurs in a single pixel.
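
If you go that route, one naive way to quantize (just a sketch; the number of levels per channel is an arbitrary choice):

#include <opencv2/opencv.hpp>

// Knock each BGR channel down to `levels` distinct values so rare one-off
// colors collapse into nearby common ones before you sample from the image.
cv::Mat quantize(const cv::Mat& src, int levels = 8)
{
    cv::Mat out = src.clone();
    int step = 256 / levels;
    for (int y = 0; y < out.rows; y++)
        for (int x = 0; x < out.cols; x++)
        {
            cv::Vec3b& px = out.at<cv::Vec3b>(y, x);
            for (int c = 0; c < 3; c++)
                px[c] = static_cast<uchar>((px[c] / step) * step + step / 2);
        }
    return out;
}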

We had this thread a few months ago, think of something original.

Black anime girls are cute!
C U T E !

Here's what I have after a million iterations.

Attached: res.png (672x1008, 645K)

Here's 10 million. That's with a radius of 8. I think that's too big for any details to emerge.

Attached: res.png (672x1008, 565K)

how about this, fam?

Attached: 1498672511390.png (1920x1080, 1.29M)

Here's 1m circles and 10m lines. I like this one.

Attached: res1+10m.png (672x1008, 1.04M)

What have you guys written these in? It feels like it would be very hard in something like C, but maybe I'm just a brainlet. Is it easier in Java or something? I feel like the compare function would be the hardest part.

Maybe it's because of the console output, but I notice that the image quality does not improve much after 30 minutes.

I am planning to put all the unique colors from the source into an array and pick randomly from that array, instead of just grabbing a random pixel from the source and taking its color.

Python, C, etc. A lot of libraries allow this kind of stuff.

I have written mine in English.

I am using OpenCV to do this; don't know about the others.

I just compute the Euclidean distance of the old and new images to the source and compare the two.

Actually, in C/C++ with OpenCV it isn't that big of a deal.

Your pseudo code pls?

You should measure it in iterations, not minutes: how many improvements you attempted. Looking at your results, I honestly feel you have an error in the function that computes the difference between the pictures.

I wrote it in C++ without imaging libraries, but I use the ImageMagick executable to convert from BMP to other formats.

github.com/AUTOMATIC1111/randdraw

The compare function is the easiest to write efficiently in C (by "write" I mean write from scratch, not use a ready-made function from a library).

Attached: (you).png (961x545, 945K)

probably imagemagick

> Have a source image and a blank image.
> Draw a shape (in my case, a circle) at a random point with a random color from the source image.
> If the new image is closer to the source image, keep it; if not, revert to the previous image.
> Repeat a million times.
> Draw a shape (in my case, a line) at a random point with a random color from the source image.
> If the new image is closer to the source image, keep it; if not, revert to the previous image.
> Repeat 10 million times.
> Profit.

This thread is gay.
Someone make frame thread and show your FRAMES!

Looks like a painting, nice.

But how will I compare 1 million images in a timely manner? I mean, I do have good eyesight, but that's too much...

It's fairly simple.

Write a function to get a RandomPoint() and a function to get a RandomRGB(). That's fairly easy. Then get the pixel color at that point from the source and compare the r, g, b values. Make sure the random RGB is close enough to the source pixel's color, say within +/- 25 per channel, or you can just accept the random RGB and, if you later happen to hit the same point with a better RGB, accept the new color.
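
Roughly this, if I read it right (a sketch, not anyone's actual code here; the +/- 25 is the tolerance mentioned above):

#include <opencv2/opencv.hpp>
#include <cstdlib>

// Accept a random color for a random point only if every channel is within
// `tolerance` of the source pixel at that point.
bool colorCloseEnough(const cv::Mat& source, cv::Point p, cv::Vec3b candidate,
                      int tolerance = 25)
{
    cv::Vec3b actual = source.at<cv::Vec3b>(p);
    for (int c = 0; c < 3; c++)
        if (std::abs(int(actual[c]) - int(candidate[c])) > tolerance)
            return false;
    return true;
}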

Use a computer.

Guys, why not include in your computation both a penalty for using more shapes and a positive score for being closer to the source image?
I feel like that would generate much more interesting images, since the algorithm would try to make its shapes closely follow the original image instead of just throwing random stuff around.

>The compare function is the easiest to write efficiently in C
Comparing pixel by pixel from a BMP, or comparing the compressed files? I'm thinking about making an image duplicate finder, so this would be good to know. Converting every image to a BMP first would be too resource heavy, I would imagine, though.

>a penalty for more shapes
what
how do you imagine it working

Comparing compressed ones is meaningless... I mean... How do you even imagine it working? Two byte streams with different lengths and completely different content.

is it possible to do this in assembly?

Of course. Why not.

That seems reasonable for the paint function. For the compare function, do you do the same but for all points?

Definitely get an inventory of the colors first. The easiest way is a histogram: there are 256 cubed possible colors, so just make an array with one counter per color and increment the corresponding index for each pixel in the image. Then just extract the colors from the non-zero indices.
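
Something along these lines (illustrative sketch only; counting with ints costs about 64 MB for the 256^3 buckets, which is fine on a desktop):

#include <opencv2/opencv.hpp>
#include <vector>

// Histogram over all 256^3 possible colors, then keep the ones that actually
// occur in the source image as a palette to draw random colors from.
std::vector<cv::Vec3b> extractPalette(const cv::Mat& src)
{
    std::vector<int> counts(256 * 256 * 256, 0);
    for (int y = 0; y < src.rows; y++)
        for (int x = 0; x < src.cols; x++)
        {
            cv::Vec3b px = src.at<cv::Vec3b>(y, x);
            counts[(px[0] << 16) | (px[1] << 8) | px[2]]++;
        }

    std::vector<cv::Vec3b> palette;
    for (int i = 0; i < 256 * 256 * 256; i++)
        if (counts[i] > 0)
            palette.push_back(cv::Vec3b((uchar)(i >> 16), (uchar)(i >> 8), (uchar)i));
    return palette;
}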

One example:
> Have a source image and a blank image.
> Draw a shape (in my case, a circle) at a random point with a random color from the source image.
> If the new image is closer to the source image __by at least some amount__, keep it; if not, revert to the previous image.
> Run for a while, post the result.

This way, when you increase the acceptance threshold, your result will be one that matches the original image just as well but uses fewer shapes. I feel that the results will look more interesting that way.
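
The only change from OP's version is the accept test, something like this (sketch; `threshold` is the knob being proposed here):

// Accept an edit only if it improves the distance by at least `threshold`.
// threshold == 1 reproduces the plain "strictly closer" rule.
bool acceptEdit(long long oldDistance, long long newDistance, long long threshold)
{
    return oldDistance - newDistance >= threshold;
}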

>Not doing it with creative shapes
>No dick drawings
Jow Forums failed me

Yeah, I guess that doesn't make sense. So does every image compare tool convert them to BMP (or something similar) "under the hood", or do they just compare a few specific points and, if those match, say the images are the same?

There's plenty of room for improvement. You could at least compare only the rectangle where you painted something instead of the whole image (since everywhere else is guaranteed not to change). In my implementation I go even further and calculate the difference only for the pixels I changed while painting. The pic on the left is 9 million iterations drawing long lines, and it finishes in 8 seconds.
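
The bounding-box version of that is easy to do with OpenCV ROIs; a rough sketch (not this poster's implementation, and the changed-pixels-only version needs hooks inside the drawing routine, so it's not shown):

#include <opencv2/opencv.hpp>

// Score delta for drawing a filled circle, comparing only the bounding box
// of the circle instead of the whole image. Everything outside it is
// unchanged, so its contribution to the distance cancels out.
long long circleDelta(const cv::Mat& source, const cv::Mat& canvas,
                      cv::Point center, int radius, cv::Scalar color)
{
    cv::Rect box(center.x - radius, center.y - radius, 2 * radius + 1, 2 * radius + 1);
    box &= cv::Rect(0, 0, source.cols, source.rows);   // clip to the image

    cv::Mat before = canvas(box);
    cv::Mat after = before.clone();
    cv::circle(after, center - box.tl(), radius, color, cv::FILLED);

    long long oldDist = (long long)cv::norm(source(box), before, cv::NORM_L1);
    long long newDist = (long long)cv::norm(source(box), after, cv::NORM_L1);
    return newDist - oldDist;   // negative means the circle is an improvement
}

If the delta is negative you draw the same circle onto the real canvas; otherwise you throw it away.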

>> If the new image is closer to the source image __by at least some amount__, keep it; if not, revert to the previous image.
That's how it is supposed to be done. That's the same thing OP says.

Obviously all tools get uncompressed raw pixels before they do any operations.

Attached: y3.png (800x859, 1.18M)

We've done this like fifty times already. Why do you guys keep doing this as if you wrote something new?

>The pic on the left is 9 million iterations drawing long lines and it finishes in 8 seconds.
It looks really good. What was the original?

Original. I like how it tries to paint the initially black background with white, but since there are fewer total attempts at the edges, you can see skin color there, which is also abundant in the source picture (since it's way closer to the desired white than black is).

Attached: x.png (800x859, 566K)

It looks painted with your program, cool effect.

3 million iterations, for comparison.

He did something new for himself. It's fine, user. It's a fun topic to talk about.

Attached: y.png (800x859, 1.33M)

this looks the best, at least that's what my dick says

is your dick nearsighted

The difference is that in order for a new shape to be accepted, it should not just make the image closer to the original, it should make the image X pixels closer, where X is some parameter to be supplied.
X = 1 would make it the same as it is now, but if it is larger than that, the result will be images that use fewer shapes to achieve the same closeness.

It's just something about the filter that makes it hot.
Don't judge me pls, there are way weirder fetishes out there.

I don't think this will have the desired effect. Let's try it. Here's the girl with 1 million circles and no threshold.

Attached: y-circles.png (800x859, 599K)

Here's with a threshold of 1000.

Attached: y-circles-i1000.png (800x859, 605K)

5000

Attached: y-circles-i5000.png (800x859, 659K)

10000. Adding a restriction makes it worse. I think allowing edits that make the target picture look LESS like the original, but only by a small amount, would produce better results.

Attached: y-circles-i10000.png (800x859, 676K)

I don't think there is an error in the function that computes the difference, since it still outputs the rough shape. It just runs horribly slowly.

Here's a variation with allowing worse edits.

How many iterations is your pic in OP?

Attached: y-circles-im.png (800x859, 598K)

Forgot the code:

int eulerDistance(Mat source, Mat des)
{
    int result = 0;
    for (int x = 0; x < source.cols; x++)
    {
        for (int y = 0; y < source.rows; y++)
        {
            // at() takes (row, col), i.e. (y, x)
            Vec3b sPoint = source.at<Vec3b>(y, x);
            Vec3b dPoint = des.at<Vec3b>(y, x);
            int r = pow(sPoint[0] - dPoint[0], 2);
            int g = pow(sPoint[1] - dPoint[1], 2);
            int b = pow(sPoint[2] - dPoint[2], 2); // was dPoint[3], out of range for a Vec3b
            result = result + sqrt(r + g + b);
        }
    }
    // (debug cout line was cut off in the post)
    return result;
}

1000 pixels? That can't be it; no single circle can make the image 1000 pixels closer to the original.
Also, to make the comparison fair, count only the iterations that produced accepted shapes.
I'm not sure what the best way to go about it is, but you get the idea. Basically, try to make the algorithm place the shapes in such a way that it more closely matches the original, instead of just placing them randomly.

The extreme of such an algorithm would be an "automatic vectorizer": you feed it an image and it tells you the strokes needed to paint it.

Main loop (the parts of the code between < and > got eaten when posting; the cut-off bits are filled in approximately below):

Mat trans(height, width, CV_8UC3, Scalar(0, 0, 0));
Mat temp;
for (int i = 0; i < 200000; i++)   // loop bound was cut off; 200k per the later post
{
    temp = trans.clone();
    // random point and a color from the source, circle drawn on temp (reconstructed)
    Point drawPoint(rand() % width, rand() % height);
    Vec3b c = source.at<Vec3b>(drawPoint.y, drawPoint.x);
    circle(temp, drawPoint, 10, Scalar(c[0], c[1], c[2]), -1);
    // clamp so the 20x20 compare rectangle stays inside the image
    if (drawPoint.x < 20) drawPoint.x = 20;
    if (drawPoint.x > width-20) drawPoint.x = width-20;
    if (drawPoint.y < 20) drawPoint.y = 20;
    if (drawPoint.y > height-20) drawPoint.y = height-20;

    Rect rec(drawPoint, Size(20,20));

    int disTrans = eulerDistance(source(rec), trans(rec));
    int disTemp = eulerDistance(source(rec), temp(rec));

    if (disTemp < disTrans)   // the rest was cut off: keep the circle if it is closer
        temp.copyTo(trans);
}

I measure distance as the sum of abs(r1-r2)+abs(g1-g2)+abs(b1-b2) over all pixels, where r1...b2 are integers from 0 to 255.

>Also, to make the comparison fair, count only the iterations that produced accepted shapes.
Yeah, that's not making it fair, that's skewing it in favor of your approach.

I guess you're right. Did 40k iterations with circles, looks very similar to yours.

Attached: res.png (672x1008, 786K)

Oh, stupid me, the rectangle where I limit the pixel compare is wrong, it only contains 1/4 of the circle.

Depends on the objective. If you want to make it run faster, that would be skewing.
If you want to get the best possible image given a number of shapes, this change makes sense.
As it stands, the image generated using my approach will almost always be worse, since it will always have fewer shapes in it.

I did this a while ago.

Attached: rodney24x.webm (472x354, 2.12M)

dot*

I don't have time to look at this right now, but OP I just want you to know I've bookmarked your stuff and that you've inspired me to explore graphical programming.

Begin with a bigger circle radius and slowly tune it down.
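
One simple way to do that (just a sketch; the start/end radii and the linear schedule are arbitrary choices):

// Linearly shrink the circle radius over the course of the run, so early
// iterations block in large areas and later ones add detail.
int radiusForIteration(int i, int totalIterations, int startRadius = 64, int endRadius = 2)
{
    double t = (double)i / totalIterations;   // 0.0 at the start, 1.0 at the end
    return (int)(startRadius + t * (endRadius - startRadius));
}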

Well, it will look worse, because you're rejecting good edits. I don't want to make it go faster, I want to evaluate the quality of the result after a fixed amount of work. With your suggestion of requiring a certain number of accepted edits, you can actually easily get into an infinite loop, since after a certain number of edits with a high improvement threshold, no more edits can theoretically be accepted.

y-you too

Attached: a.webm (548x516, 1.47M)

>after a fixed amount of work
"amount of work" is computation time, so you are in fact trying to make it run faster.
Yes, you could actually make an infinite loop but only if your threshold is way too high. In that case you could simply cap the amount of failed edits before giving up.

Or, much more interesting: at each step generate X candidate images and keep only the best one to carry on, instead of taking the first "good" one.
That will dramatically increase computation time, but it will also use fewer shapes for sure.
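
A sketch of what one such step could look like (illustration only, not anyone's code here; the radius is hard-coded and it compares full images, so it's slow):

#include <opencv2/opencv.hpp>
#include <random>

// Try `candidates` random circles on copies of the canvas and keep only the
// one that gets closest to the source (if any of them improves it at all).
void bestOfNStep(const cv::Mat& source, cv::Mat& canvas, int candidates, std::mt19937& rng)
{
    std::uniform_int_distribution<int> rx(0, source.cols - 1);
    std::uniform_int_distribution<int> ry(0, source.rows - 1);

    double bestDist = cv::norm(source, canvas, cv::NORM_L1);
    cv::Mat bestCanvas = canvas;

    for (int k = 0; k < candidates; k++)
    {
        cv::Point p(rx(rng), ry(rng));
        cv::Vec3b c = source.at<cv::Vec3b>(p);
        cv::Mat trial = canvas.clone();
        cv::circle(trial, p, 8, cv::Scalar(c[0], c[1], c[2]), cv::FILLED);

        double d = cv::norm(source, trial, cv::NORM_L1);
        if (d < bestDist) { bestDist = d; bestCanvas = trial; }
    }
    canvas = bestCanvas;
}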

Did someone program this in Python? If so, post the source please!

Filthy Matlab peasant

I find it funny that you want to use fewer shapes, but also want to only count accepted shapes, resulting in, in fact, the same number of shapes.

>python programmer calling matlab programmers peasants
I... what..?

Attached: a.webm (548x516, 1.45M)

But... it's C++.

>the same number of shapes
For a closer result to the original image.
The point is better quality, as measured by closeness / number_of_shapes.

Fixed the compare rectangle; already got a better result than the 1 hour run in 5 minutes.

200k iterations.

Attached: 5minute.png (1419x1043, 1.27M)

Told you your comparison was broken.

Shapes keep overlapping, so you won't see how many of them there are anyway.

So as I predicted, it's extremely easy to get into an infinite loop. After a lot of trial and error, here is a picture with 20k accepted edits, only accepting edits that bring it at least 1000 closer to the original...

Attached: y-circles-20k+.png (800x859, 629K)

>it's that thread again

10M tries.
Mine runs at decent speed, but the RAM usage is ridiculous, like 200MB for this 2000x1000 image.
I'll blame it on Java.

Attached: soyher.png (1101x540, 1.43M)

And here is 20k accepted edits without the threshold. You may go ahead and say that the pic above looks better, but it took many times more iterations to build. And it doesn't look like it uses fewer shapes at all.

Attached: y-circles-20k.png (800x859, 670K)

I don't think it makes much sense to compare two drawings like that, since the shapes are drawn at random places. Maybe you just got luckier when the threshold was set.

How much memory does yours use?

And here is accepting any improvement, but with 500k iterations. It takes the same amount of time as 20k accepted edits with the threshold enabled, so I consider them the same amount of work. Is one an improvement over the other? Hardly. Is one a pain in the ass to work with because it keeps falling into an infinite loop? Yes.

No. Luck has nothing to do with this.

Six megabytes for the pic related.

Forgot my pic.

And, if you really want to prove your point, get off your ass and write your own implementation.

Attached: y-circles-xxx.png (800x859, 630K)

Posting a couple from the first time we did this.
Started with this image...

Attached: 1453577897427.jpg (1920x1200, 1008K)

Here it is run through my implementation

Attached: xp.jpg 1330800 iterations.png (1920x1200, 818K)

Another user ran it through their line version

Attached: xp lines 10M iterations.png (1920x1200, 2.96M)

It does. 500K tries is going to take a (more or less) fixed amount of time.
20K accepted edits is going to depend on how the random circles get picked. Obviously the probability of getting many consecutive rejected circles is low, but there's a slim chance it might run for much longer.

And by far my favourite of them, they took my dots result and ran it through the line version.

Attached: dots+10M iterations lines.jpg (1920x1200, 1.97M)

Here's the initial image with lines, after 25 million iterations.
At this point, it's well past the point of diminishing returns, and looks less interesting than the previous ones.

Attached: xp lines 25M iterations.png (1920x1200, 3.04M)

It is almost guaranteed to hang after a certain number of edits. If the distance between the all-black canvas and the target picture is 1 million and you require each edit to be 1000 closer to the original, then your program is guaranteed to hang if you require more than 1000 edits: 1000 edits at 1000 improvement each already uses up the entire initial distance.

And no, luck has nothing to do with the resulting pictures. Results are extremely reproducible: run the program again and the pic won't be exactly the same, but it will look extremely similar, and you'd know this if you had completed the challenge yourself instead of armchairing.

While it is true that this has been done before, I still see merit in pursuing this further.
More intelligent, deterministic methods could get really cool results in a fraction of the time, and layering effects in different ways like that is certainly worth a shot.

I keep forgetting them pictures.

Attached: bliss.png (1920x1200, 1.54M)

If the thread is still up after I sleep, I'm definitely doing this.

Image is from last year.
I was drawing random 1-50px ellipses and compared the histogram of the entire image after each iteration (instead of just the changed area).
I'm surprised that this actually worked.

Attached: output0500000.jpg (1920x1080, 182K)

By histogram do you mean the thing that does not change if you rearrange the pixels randomly? I don't think that can work.

Is there a specific name for this kind of algorithm?

Don't think so. I call my program randdraw. A well-known sorting algorithm that works in a similar way is called bogosort. Or, well, it has some other names too.

> In computer science, bogosort[1][2] (also permutation sort, stupid sort,[3] slowsort,[4] shotgun sort or monkey sort) is a highly ineffective sorting function based on the generate and test paradigm.

Oh, this thing again. I did a multithreaded version in C using only the pthread library that does 200 million 10x10 boxes in 30 seconds on a 2500K. Try beating that speed.

Attached: outt.png (1920x1080, 403K)

Generative evolution or genetic evolution, or something like that. However, that's probably more complex than some of the solutions here.

There are various applications of this sort of thing.

rednuht.org/genetic_cars_2/

How do you compare the two images?
Can someone explain or show code?
C++ preferred
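
Not anyone's actual code from this thread, but the comparison everyone here is doing boils down to summing per-pixel differences. A plain C++ sketch with no image library, assuming both images are already decoded into same-sized 8-bit interleaved RGB buffers:

#include <cstdint>
#include <cstdlib>
#include <vector>

// Sum of absolute per-channel differences between two equally sized images
// stored as interleaved 8-bit RGB (or BGR) buffers. Lower means closer.
long long imageDistance(const std::vector<uint8_t>& a, const std::vector<uint8_t>& b)
{
    long long total = 0;
    for (size_t i = 0; i < a.size(); i++)
        total += std::abs(int(a[i]) - int(b[i]));
    return total;
}

Squared differences per pixel (what OP's eulerDistance is going for) work just as well; all the accept/reject step needs is a number that goes down when the drawn shape is a better match.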