Who was in the wrong here?

Attached: Screen Shot 2018-04-24 at 12.52.13 AM.png (1640x366, 118K)

Everyone not using BCHS.

>node queers making fun of anybody

Attached: justed.png (753x503, 562K)

You, for being some reddit faggot
Now go back

>"Hackathon"
>all webshits
I sympathize with the php guy though

The JavaScript queers. At least PHP kind of looks like C to me, and I enjoy writing it

Attached: lrg.jpg (500x661, 114K)

All of them for using webshit and meme databases.

>JS cancer making fun of PHP retards
Just a typical case of pot meet kettle

The redditor for being such a thin skinned faggot

dammit

so

seems to be working

The same shitty community that was behind PHP in the past is on the Node train today; that's why PHP has improved a lot since version 5.

>getting ur feels hurt
wow the absolute state of php shit streakers

Attached: 1518141959966.jpg (250x250, 5K)

Sure, it's easy to shit on someone for using some language; I pretty much shit on most people using JavaScript myself. However, I do acknowledge that (almost) every programming language has a task it is good at, or at least some merit for the developer. Making fun of someone because they use certain tools is just dumb.
Of course, the general rule applies that most shit written in PHP and JS is shit, but that's to be judged by the code itself, not the language.
And really, JS faggots are in no place to shit on others.
>Hurr durr makes the web run
My ass.

Attached: gno.png (1000x669, 950K)

>I'm better because I use ruby on rails
kys weeb

The PHP lad; those cock-sucking metrosexual JS hipsters/parasites couldn't make an IP logger if their lives depended on it.

anyone coding in a weakly typed language

Attached: programming.jpg (593x640, 100K)

Bullying is NEVER okay. The Hackathon organizers need to answer for creating this toxic atmosphere.

web programming is pretty gay man

a functional end product would've shut their pieholes.

>Who was in the wrong here?
Basedgrammers jerking off over whatever tool-of-the-month is in right now.

What should I be using instead of MongoDB?

Attached: 1508021105014.jpg (307x462, 13K)

PostgreSQL

Why?

Attached: Capture.png (1235x306, 29K)

That's very... poetic... but what are the actual technical issues?

PostgreSQL is just better in every way, from initialisation to storing data.

MongoDB is for people who think they need NoSQL and have never done any real administration in their life.

PostgreSQL is a 30-year-old database with 2018 technology.
MongoDB is a 10-year-old database with 1968 technology.

Okay, thanks. That makes sense.

Suppose I have a few hundred long lists of strings (hundreds of thousands of strings per list), and I want to find the set intersection of two or more of these lists.

What database would be best suited for this sort of list storage and intersection? Is PostgreSQL still the best option for this particular use case?

Attached: 1510501514704.jpg (480x352, 28K)

Possibly Cassandra.

Cassandra for something that can be solved with a shell or Python script?!

It depends on the other requirements. Do you want multi-user access, do you want to update this data, etc.?

If all you want is to query this data yourself, it looks like text files and a small Python or shell script would be the best solution.
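Something like this would be the whole "database" (rough, untested sketch; assumes one string per line, and the filenames are made up):

[code]
# Load each list from a plain text file, one string per line,
# then intersect with Python's built-in set operations.
def load(path):
    with open(path) as f:
        return {line.rstrip("\n") for line in f}

# listA.txt / listB.txt are hypothetical names for two of the lists.
common = load("listA.txt") & load("listB.txt")
for s in sorted(common):
    print(s)
[/code]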

Everyone for not using avisynth.

He was asking for a database, but you're probably right... hundreds of thousands of strings isn't much; you could definitely do this with CSV and a Python script.

Although I figure I'd throw Apache Spark at the .csv anyhow, just in case the problem grows.

To further explain, using the Unix shell (see the sketch after the steps):

1. Have one file per list
2. Use sed if required to make sure you have one string per line in each file (e.g. to replace spaces with newlines)
3. Use sort to sort
4. Use comm to find common lines (i.e. strings) between two sorted files (lists)
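Step 4's comm -12 is effectively a sorted-merge; for the curious, here's a rough Python equivalent of that one step (made-up filenames, assumes both files are already sorted):

[code]
# Walk two sorted files in lockstep and yield lines common to both,
# which is roughly what `comm -12 file_a file_b` prints.
def common_lines(path_a, path_b):
    with open(path_a) as fa, open(path_b) as fb:
        a, b = fa.readline(), fb.readline()
        while a and b:
            if a < b:
                a = fa.readline()
            elif a > b:
                b = fb.readline()
            else:
                yield a.rstrip("\n")
                a, b = fa.readline(), fb.readline()

for s in common_lines("sorted_a.txt", "sorted_b.txt"):
    print(s)
[/code]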

I would throw a shell script at the data, just in case the problem does not grow :)

Adding and removing from the lists is essential. Parallel queries are important. Additions and removals can probably be queued and performed when no queries are running; parallel edits aren't important. Immutable strings are fine since I can just delete and re-add a string.

>text files and a small Python or shell script would be the best solution.
This is the approach I'm currently using: loading the lists into memory as set objects (mutable hash tables) and using a standard-library intersection function on them, but I'm looking for additional performance.
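Not sure where your time is actually going, but one cheap win with the set approach is to intersect smallest-first and bail out early; a rough sketch (assumes `lists` maps a list name to its set of strings, and at least one name is given):

[code]
# Intersect the named sets starting from the smallest one, so the
# intermediate result only ever shrinks; stop as soon as it's empty.
def intersect(lists, names):
    sets = sorted((lists[n] for n in names), key=len)
    result = set(sets[0])
    for s in sets[1:]:
        result &= s
        if not result:
            break
    return result
[/code]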

>@65669673
(You) were in the wrong for not giving that person gold for their mental troubles.

>1. Have one file per list
>2. Use sed if required to make sure you have one string per line in each file (e.g. to replace space by newline)
>3. Use sort to sort
>4. Use comm to find common lines (i.e. strings) between two sorted files (lists)
Very early on in the project I was doing this, but I quickly abandoned it because removing a line from the middle of a list meant writing out the file again. I suppose I could have used ed to knock out lines from the middle of the file... but I decided that having all the lists loaded into my application's memory was a reasonable solution.

Hardcode them as variables. Also make them global for ezy access.

ebin :-DDDDD

Eh, but that's not nearly as fun... never mind that it's not generally much easier.

Each string is a row, with a column per list. To do intersection, just select rows where both columns are set to true.

The number of columns could grow substantially in the future, though, and my impression is that the more columns you add, the worse your performance could get. I just checked, and I currently have approx. 1400 lists. Reasonably this could double or triple in the next year, and beyond that I can't really predict.
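If you do go the database route, the usual way to dodge the column-growth problem is one row per (list, string) pair instead of a column per list; adding a list is then just inserting rows, not altering the schema. A rough sketch, using sqlite3 purely as a stand-in for PostgreSQL (table and column names made up):

[code]
import sqlite3

# One membership row per (list, string) pair; the schema never changes
# when a new list is added.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE membership (list_id INTEGER, string TEXT)")
db.execute("CREATE INDEX idx ON membership (list_id, string)")
db.executemany("INSERT INTO membership VALUES (?, ?)",
               [(1, "foo"), (1, "bar"), (2, "bar"), (2, "baz")])

# Intersection = strings that appear under every requested list id.
wanted = (1, 2)
rows = db.execute(
    "SELECT string FROM membership WHERE list_id IN (?, ?) "
    "GROUP BY string HAVING COUNT(DISTINCT list_id) = ?",
    (*wanted, len(wanted))).fetchall()
print([r[0] for r in rows])  # -> ['bar']
[/code]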