Rate my database design

I've designed a rather elegant inventory database to keep track of all my stuff and where it's stored, because I'm tired of constantly losing things. I'm rather proud of my work, so I thought I'd share it here. My database includes a Master Table that lists all items, and then has a foreign key into a separate table for each item category (basically whichever foreign key is non-null will be the one you use). In the diagram FK stands for "foreign key". There's also a foreign key into a substructure table, which refers to e.g. a particular shelf on a bookcase, and that table points to a superstructure table, which is for e.g. the entire bookcase.
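In SQL terms, the layout described above might look something like this. This is a hypothetical sketch (table and column names are guesses, not taken from the attached diagram), written against SQLite for brevity; the MySQL DDL would be nearly identical:

```python
import sqlite3

# Hypothetical sketch of the schema described above. Names are invented;
# SQLite syntax, but MySQL would look almost the same.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE superstructure (id INTEGER PRIMARY KEY, name TEXT);  -- e.g. a bookcase
CREATE TABLE substructure (id INTEGER PRIMARY KEY, name TEXT,     -- e.g. a shelf
    super_id INTEGER REFERENCES superstructure(id));

CREATE TABLE books    (id INTEGER PRIMARY KEY, author TEXT, publisher TEXT);
CREATE TABLE clothing (id INTEGER PRIMARY KEY, size TEXT);

-- Master table: one nullable FK per category; whichever is non-NULL applies.
CREATE TABLE master (
    id          INTEGER PRIMARY KEY,
    quantity    INTEGER,
    book_id     INTEGER REFERENCES books(id),
    clothing_id INTEGER REFERENCES clothing(id),
    sub_id      INTEGER REFERENCES substructure(id)
);
""")
```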

I will write a web interface for the database using HTML, Javascript, PHP, and MySQLi so I don't have to issue the SQL commands manually. It will have a layered stack structure as shown on the right. HTML form --> Javascript --> PHP --> MySQLi --> MySQL database. The job of the Javascript frontend is to (1) validate the form data, (2) build a SQL query based on the form data (which it will store as the value of a hidden form element), and (3) submit the form to the PHP backend. The job of the PHP backend is to (1) make any necessary modifications to the SQL query based on the contents of the database, (2) connect to the database, (3) submit the query and get the result via the MySQLi interface, and (4) output an HTML table displaying the output of the query.

I've also designed this database in a normalized fashion, so that it is in second normal form. Am I a database genius or what?

Attached: inventory-db.jpg (1200x900, 310K)

your tables are not atomic.

You have won a prize.

You don't need MySQL for this kind of project.

But I want to use MySQL. It's just what I'm comfortable with.

>PHP
why tho?

You will still lose things, a database won't help you at all.

It'll still be fun to design and build.

Because it's IMO the best web backend language in existence, and I don't care if anyone else disagrees with that statement.

you are so fucking retarded. stop planning and just build it and see what happens. you are way too inexperienced and unknowledgeable to gain anything from this needless contemplation

Why do you need a master table? Why not just have a view?

To get a list of everything, couldn’t you just have multiple queries that select stuff from your other tables?

Also why is this master table called a master table? Shouldn’t the queries insert into their respective tables? Having a master table implies that a book would be inserted into the master table first and then inserted into the book table when it should be vice versa. This implication is also shown in your drawing where it looks like everything goes to the master table first.

You could use triggers to auto update anything from the main tables to the master table.

The master table would include only a primary key, the foreign keys, and the quantity for each item. The individual tables would include information specific to each category (e.g. author and publisher for books) and would be linked to the Master Table via table joins. I don't exactly remember how to do table joins in MySQL, but I'm sure the foreign key arrangement I have would be conducive to that.
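For reference, the join OP can't remember might look like this. A hypothetical sketch (column names are invented), shown with Python's sqlite3 for brevity; the SELECT itself is the same in MySQL:

```python
import sqlite3

# Sketch of joining master to a category table through the FK.
# Table/column names are guesses, not OP's actual schema.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE books  (id INTEGER PRIMARY KEY, title TEXT, author TEXT);
CREATE TABLE master (id INTEGER PRIMARY KEY, quantity INTEGER,
                     book_id INTEGER REFERENCES books(id));
INSERT INTO books  VALUES (1, 'SQL Antipatterns', 'Karwin');
INSERT INTO master VALUES (1, 2, 1);
""")
# Join master to its category table via the non-NULL foreign key.
row = con.execute("""
    SELECT m.quantity, b.title, b.author
    FROM master m JOIN books b ON b.id = m.book_id
""").fetchone()
print(row)  # → (2, 'SQL Antipatterns', 'Karwin')
```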

when the bait is so good you're not even sure any more

It is either poorly designed or poorly written.
Can't tell.
Also
>HTML, Javascript, PHP, and MySQLi
Smells like sopa de macaco

Actually, rather than doing joins or using foreign keys, a superior design would be to add Mongo to your stack and have the MySQL database contain hashes rather than foreign keys. Those hashes let you look up an object in Mongo, and the object will have the necessary data. Of course, the object will also have references back to the database, in the form of table/id pairs: you look up the ID in the specified table, then find the appropriate data along with any hashes for other objects saved to Mongo. Sounds technical but is actually quite simple and will save you a lot of trouble

Looks good bro. You're going to kill it.

this is obvious bait.

what are the columns in the "master table"? seems absolutely redundant. just store quantities on the items in the other tables and use SQL COUNTs and WHEREs to count the items in each. Please read a book on relational databases or take a class and come back. you clearly are uneducated in databases.

so you're proposing an html form where you override the default HTML form behavior to use javascript to send the form data to a PHP server endpoint, that then runs a query on your database, returns it to the PHP server, which in turn returns that to the open connection from the javascript request (as json probably), then you use more javascript to update elements on the page?

makes sense I guess. Your whole database layout is fucked though.

Not terrible design. You can do a single query against master for all items or a particular item, then use FK lookups against category tables.
Substructure / superstructure is clunky. How about superstructure / substructure fields in Master, where the super field is an FK to a lookup table.
Also, sqlite is perfect for this type of thing. Put it on your own web server and you can GUI via web or cmdline via ssh connection.

do not listen to this luddite.

The benefit of Master table is that you can pull a lookup list of all items with a single query. The book / cd / whatever title, description, etc are in master.
Without Master, to get a full item list (i.e. for your web interface) would require a UNION SELECT ALL or something similar against every table every time. This is inefficient.

>Sounds "technical"
That's a funny way to spell "retarded"

it's more inefficient having redundant data and an extra table.

it's called a view you fucking retard

Not necessarily, the data don't have to be redundant. I think what OP is going for is that some data is the same for every item in Master, e.g. Name, Description, Location...
Then data that differs for different categories is linked in the category subtable via an FK join. This isn't too uncommon a design when your data is somewhat similar but with different details.

How do you think a view works exactly dipshit? Is there a magic view fairy in the database engine or is the view composed of SQL statements that are executed when the view is accessed? Without a master table the view would be built via UNION SELECT or something similar.
After your meth high ends pick up a book.

That's not how you design databases. Look up ERD.

>The book / cd / whatever title, description, etc are in master.
vs.
>The master table would include only a primary key, the foreign keys, and the quantity for each item.
does not compute.

Views can be memory-backed/cached. UNION SELECTing n tables performs better than LEFT JOINing n+1 tables.

It literally isn't though retard and your ERD is basically non existent.
Also as someone else mentioned, mysql is overkill. sqlite3 can do this job easily.

>mysql is overkill. sqlite3 can do this job easily.
even a csv file would probably be good enough. otoh, i'm managing my 5 mail accounts and ~20 aliases in postgres, because it's what i'm used to.

I wouldn't trust myself to reimplement all the hard-won experience and filesystem knowledge sqlite3 employs to stay safe with multiple readers and writers, and to keep writes atomic.

I personally wouldn't use files for anything other than configs or very large, application specific stuff.

this is peak autism or comedic genius

either way hilarious

>UNION SELECTing n tables performs better than LEFT JOINing n+1 tables
True, but n+1 joins in this schema are mainly needed to build a full list with ALL fields for ALL Master items. This is pretty rare if you design right.
Master lets you build a quick list from one table that can be selected for viewing / editing, something like...
1 BOOK Feminism and user, critical theory
2 CLOTHING Lion King man thong
3 DVD Fisting Fireman 7
Now you can select one of these and pull details in a 1-1 table join (e.g. Master -- Books)

just use jsonb column sempai

Attached: 1557717391125.png (535x409, 31K)

sqlite is great for multiple concurrent reads. It technically locks up for writes, but in any non-retarded setup you OPEN the db, update / append, then CLOSE db. This takes milliseconds.
The sqlite.org website is built on a sqlite backend and it has tens of thousands of hits daily.

The master table makes no sense

Why can't you just have one table and a field in that table that tells you the type of object it is?

if you already know what table you want to fetch from in most cases (i.e. by parsing UPPERCASE KEYWORDS), then this becomes a no-brainer. An index scan on a single table will always be faster than an index scan plus a join, especially if the index in the latter case covers around five times as many rows.

Not OP, but short answer is that clothing doesn't need a Publisher field. Master is a title and like you said an object type. Any column-data that is specific to a category can go into a subtable field. user above mentioned a JSON column which is another possible solution.
There is another schema type that would use a foreign table of key-value pairs such that multiple rows describe an object, but the queries tend to get messy.
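That key-value (entity-attribute-value) variant can be sketched as below. A hypothetical example (names invented, SQLite via Python); the last query shows why it gets messy: one join per attribute you want back as a column.

```python
import sqlite3

# Sketch of the key-value (EAV) schema mentioned above: multiple rows
# describe one object. Names are invented for illustration.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE item (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE item_attr (item_id INTEGER REFERENCES item(id),
                        key TEXT, value TEXT);
INSERT INTO item VALUES (1, 'Dune');
INSERT INTO item_attr VALUES (1, 'author', 'Herbert'), (1, 'publisher', 'Chilton');
""")
# The messy part: a self-join per attribute to pivot rows into columns.
row = con.execute("""
    SELECT i.name, a1.value AS author, a2.value AS publisher
    FROM item i
    JOIN item_attr a1 ON a1.item_id = i.id AND a1.key = 'author'
    JOIN item_attr a2 ON a2.item_id = i.id AND a2.key = 'publisher'
""").fetchone()
print(row)  # → ('Dune', 'Herbert', 'Chilton')
```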

>opening/closing the db connection for every query
>non-retarded
wat?

so you're just gonna keep it open between loading the page and you submitting whatever form on the webpage? Retard.

>An index scan on a single table will always be faster...
You've kinda made my point. One of the most common queries for the gui will be a simple list of all items for further selection of single items. Master table lets you query and sort that list quickly from a single table.
And in a well designed gui you may not need a join at all to pull your subtable data. You already have the Master data in the list-query so you just pull the subtable item via the FK. There's never a perfect system; compromises always occur.

you can just leave nulls for the fields that aren't applicable. in either situation you are writing queries specific to the type of object you are querying. but having one table is a much simpler design and wouldn't require unioned queries across multiple tables or the dumb master table that doesn't do anything meaningful beyond complicating things unnecessarily

You may be confused. Best way is something like,
prepare query1, prepare query2...
OPEN db
exec query1, query2...
CLOSE db
The db only opens when everything is ready to go, and closes immediately after.
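In Python with sqlite3 that pattern looks roughly like this (a sketch; the file name and table are hypothetical):

```python
import sqlite3

# The open-late / close-early pattern described above.
queries = [
    ("INSERT INTO item (name) VALUES (?)", ("Dune",)),
    ("INSERT INTO item (name) VALUES (?)", ("Alien",)),
]                                              # prepare query1, query2...
con = sqlite3.connect("inventory.db")          # OPEN db
try:
    con.execute("CREATE TABLE IF NOT EXISTS item "
                "(id INTEGER PRIMARY KEY, name TEXT)")
    with con:                                  # transaction: commits on success
        for sql, params in queries:            # exec query1, query2...
            con.execute(sql, params)
finally:
    con.close()                                # CLOSE db immediately after
```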

>you can just leave nulls for the fields that aren't applicable
eeek, this is sorta true, in the same sense that I don't really need a toilet if I just shit on the kitchen floor. It isn't elegant, and violates basic db table design principles. A table should only contain fields that apply to the object represented in that table.

of course, there's no reason not to. once you leave the "single user, runs on a raspberry" tier and have actual load on your server, your application will keep multiple db connections open and re-use them for incoming requests. it's called a connection pool and has been common practice for decades.
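A connection pool can be sketched in a few lines (toy illustration only, not production code; real servers use a library for this):

```python
import sqlite3
import queue

# Minimal sketch of the connection-pool idea above: open a few
# connections up front, hand them out per request, and return them
# when done instead of closing.
class Pool:
    def __init__(self, db, size=4):
        self.q = queue.Queue()
        for _ in range(size):
            self.q.put(sqlite3.connect(db, check_same_thread=False))

    def acquire(self):
        return self.q.get()      # blocks if all connections are in use

    def release(self, con):
        self.q.put(con)          # reuse instead of close

pool = Pool(":memory:")
con = pool.acquire()
result = con.execute("SELECT 1").fetchone()[0]
pool.release(con)
print(result)  # → 1
```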

You really need to do this properly in UML or crow's foot notation; it's confusing at the moment and looks more like Django model inheritance or something

Having a single point of entry for all queries just seems odd, you'd be better off doing something similar with views.

Have you considered using a framework?

>second normal form
Into the trash

You're talking about different abstraction-levels. From the client perspective it still looks like "opening" a connection and "closing" it when done.
From the server perspective you have a pool of open connections to allocate to incoming requests. But for applications with high concurrent reads / low writes connection pooling may be overkill.

it is not UML/ER conformant. making it so would increase readability

i guess your part about OPEN/CLOSE is describing the storage backend, not the query interface. not sure how sqlite handles that, but how would that work for transactions? the whole db locks up until you COMMIT or ROLLBACK?

if you are indeed talking about the query interface, see

bedtime, gnite Jow Forums. Have fun with your project OP.

>how would that work for transactions
Wrap all update / appends in transaction, like...
prepare query1, prepare query2...
OPEN db
TRANSACTION
exec query1, query2...
COMMIT
CLOSE db
>whole db locks up until you COMMIT or ROLLBACK?
Yes that's the whole point of transactions. But the db lock is literally just a few milliseconds. You could easily handle tens to hundreds of writes per minute because they are unlikely to hit the db at the same time.
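Spelled out with sqlite3 in Python, the wrapper above looks like this (schema invented for illustration; `isolation_level=None` gives manual transaction control):

```python
import sqlite3

# The transaction wrapper above, with an explicit BEGIN/COMMIT and a
# ROLLBACK on failure. Table name and columns are invented.
con = sqlite3.connect(":memory:", isolation_level=None)  # manual tx control
con.execute("CREATE TABLE item (id INTEGER PRIMARY KEY, qty INTEGER)")
try:
    con.execute("BEGIN")                              # TRANSACTION
    con.execute("INSERT INTO item VALUES (1, 5)")     # exec query1, query2...
    con.execute("INSERT INTO item VALUES (2, 3)")
    con.execute("COMMIT")                             # COMMIT
except sqlite3.Error:
    con.execute("ROLLBACK")                           # undo everything on error
n = con.execute("SELECT COUNT(*) FROM item").fetchone()[0]
con.close()                                           # CLOSE db
print(n)  # → 2
```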

>From the client perspective it still looks like "opening" a connection and "closing" it when done.
obviously, because the client is talking http, which is connection per request - or connection per multi-request if you're talking about http2.

>for applications with high concurrent reads
>connection pooling may be overkill.
again, wat?

Reads are easy, writes are hard. Writes have to worry about COMMIT / ROLLBACK, triggers, index updating, view updating etc. Reads just SELECT and that's it. Reads are much lower processing load on db server.
Above assumes of course that your idiot developer doesn't do shit like unindexed queries, or my personal favorite, where sanjay submits a list of items as
OPEN SELECT ITEM 1 CLOSE
OPEN SELECT ITEM 2 CLOSE
OPEN SELECT ITEM N CLOSE
And so on in that fashion.

you are thinking of your tables as things that need to be treated very differently, when they aren't. they are just items with a few pieces of identifying information that can be thought of pretty much as a generic item being inventoried. by breaking each out into a separate table you are over-engineering this and making it more complicated than it needs to be. it might become more obvious as you start to add more types of items beyond what you currently have and each requires its own table

I'm not OP, I'm just defending his design. It has some tradeoffs but every design does. OP can make it work if he wants.
>as you start to add more types of items... each requires its own table
That sometimes is the case. It depends on just how different the types of items are and how specifically you want to model the data. Always tradeoffs.

>whole db locks up until you COMMIT or ROLLBACK
>Yes that's the whole point of transactions.
have you ever even written any multiuser backend code at all? the point of transactions is to run multiple write queries concurrently without corrupting your data (the A, C and I in ACID). some might fail and have to be rolled back, but most will commit just fine. locking the whole db for every write access would create the mother of all bottlenecks.

Not OP again here, I'm tired. Signing off, sweet dreams Jow Forums.

You could use MS Access for that

The db engine itself is handling that. At some point the engine must lock up tables for very short amounts of time to allow indexes etc to update. That's the Consistent part of ACID. Do you know what a millisecond is? Hint, there are 1000 of them in one second.
p.s. I really really need bedtimes now. Gnite for real.

in which universe would
OPEN SELECT ITEMS AUTOCOMMIT CLOSE
perform any better than
SELECT ITEMS AUTOCOMMIT
on a pooled connection? remember, we're talking about high concurrent reads, so connections would be opened/closed constantly. like i said, for single user "runs on a raspi" scenarios all this doesn't matter, firing up the db by inetd for every request would probably still be fine.

You ought to call it something other than Master Table or the Social Justice Jerks will call you names.

What you want is this, OP:
Link your specialised tables back to your master table, not the other way around. No need for null keys this way.
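That inverted layout (often called class-table inheritance) looks like this. A hypothetical sketch (names invented, SQLite via Python): the category table holds the FK back to master, so master needs no nullable FK columns at all.

```python
import sqlite3

# Category tables point back at master via a FK, instead of master
# holding one nullable FK per category. Names are invented.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE master (id INTEGER PRIMARY KEY, name TEXT,
                     category TEXT, quantity INTEGER);
CREATE TABLE books (master_id INTEGER PRIMARY KEY REFERENCES master(id),
                    author TEXT, publisher TEXT);
INSERT INTO master VALUES (1, 'Dune', 'BOOK', 1);
INSERT INTO books  VALUES (1, 'Herbert', 'Chilton');
""")
row = con.execute("""
    SELECT m.name, b.author
    FROM master m JOIN books b ON b.master_id = m.id
""").fetchone()
print(row)  # → ('Dune', 'Herbert')
```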

I'm not sure what you're trying to say. If you want to believe that an application that has 1000000 reads plus 1000 writes is the same as 1000000 writes plus 1000 reads more power to you. Opinions are like assholes. Everybody has them but most of them stink.

>At some point the engine must lock up tables for very short amounts of time to allow indexes etc to update.
sqlite: maybe. something like postgres: no. an rdbms made for multi-connection use will keep transaction logs for each transaction and fail them when data gets dirty. That's the Isolation part of ACID.
>Do you know what a millisecond is? Hint, there are 1000 of them in one second.
It's a tenth of the maximum select (read-only) query runtime allowed for some of the backends i committed code to. The same backend will (concurrently) run transactions that sometimes take multiple seconds to commit.

>I'm not sure what you're trying to say.
I'm trying to say that
1000000 reads plus 1000 writes
is less work than
opening 1001000 connections plus 1000000 reads plus 1000 writes plus closing 1001000 connections

Attached: ______.png (515x478, 14K)

...