Is TDD a meme?

Attached: 1_ieVWcSsJmeBbZFo6a_dL5g.png (384x350, 27K)


But when is the software finished?

One thing I've never understood about TDD is how you write a function that tests another function with 100% accuracy without just repeating the original function

You're only verifying the output of the function, not the implementation. This is why testing functions that perform effects (i.e., do things outside of themselves) is difficult.

Yes but you can't completely verify the output of a function without reimplementing the function you're testing

I think it's reasonable in some domains, where the exact requirements of each component are known in advance and TDD allows you to verify correctness and iterate quickly. In practice though, most projects are not that well specified, and you are better off prototyping and iterating without worrying too much about correctness, then finalizing your interfaces and writing tests to match so you can refactor in the future.

In practice it is impossible to cover all possible test cases, so at best you make an educated decision about what cases are likely to cause issues and write tests for those, since they provide the most value. As issues are discovered, you can fix them and add new (regression) tests to verify they stay fixed in the future. It's really more of an art than a science. Also consider looking into black box vs white box testing - black box is generally more relevant for TDD since you can't consider implementation details when writing tests initially.
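
For example, a minimal regression-test sketch in Python (largest and its empty-list bug are invented for illustration):
def largest(xs):
    # toy implementation; the empty-list guard is the "fix" being pinned down
    return max(xs) if xs else None

def test_regression_empty_list():
    # an empty list once crashed with ValueError; keep that fixed forever
    assert largest([]) is None

def test_normal_case():
    assert largest([3, 1, 2]) == 3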

so in reality tests aren't objective and are subject to human error just like the implementation, so what's the point

Yes

>the current state of Jow Forums

Attached: Thinking_Renchon.png (290x290, 173K)

When you write an automated test, you at least know that the exact same error can't happen again. People working on a project change over time, forget things, etc. You can't guarantee that a test will cover all the behavior that you want, but it will at least cover some of it (and hopefully the most important parts). Even if you know the exact places where an issue will occur, there's no way of guaranteeing that a human programmer will check these areas for problems every single time they check in their code.

it's used in some areas where safety is the maximum priority, like embedded systems for planes and rockets kek

You can't practically verify all outputs, but not for that reason. In theory, if you tested all possible inputs, you would be able to "prove" the function is correct. There are some mechanisms for automatically generating inputs: search for property-based testing or generative testing.
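
For example, a rough property-based sketch using Python's Hypothesis library (square is just a stand-in function, not anyone's real code):
from hypothesis import given
import hypothesis.strategies as st

def square(x):
    return x ** 2

@given(st.integers())
def test_square_never_negative(x):
    # property that must hold for every generated input
    assert square(x) >= 0

@given(st.integers())
def test_square_is_symmetric(x):
    # square(-x) == square(x), checked without re-deriving x**2 in the test
    assert square(-x) == square(x)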

Some languages go further and allow you to write programs that can be formally proved in the mathematical sense: search for Coq.

The point is that you can cover a lot of ground fairly cheaply. For example, consider a function that takes two arguments, both numbers, and returns another number. Some obvious candidates for inputs are zero, negative numbers, one, two, and an obscenely large number. If you're writing in a language which allows arbitrary types, this would be a good opportunity to also test null and other types like strings or arrays and verify that they either throw exceptions or return something sane (like NaN or zero).

Hand-written inputs for unit tests are good bang for your buck, but obviously most inputs can't be hand-tested this way.
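
A quick pytest sketch of those candidates (safe_div is made up so the example is self-contained):
import math
import pytest

def safe_div(a, b):
    # toy implementation for the sketch
    if not isinstance(a, (int, float)) or not isinstance(b, (int, float)):
        raise TypeError("numbers only")
    if b == 0:
        return math.nan
    return a / b

def test_obvious_candidates():
    assert safe_div(4, 2) == 2               # a normal case
    assert safe_div(-6, 3) == -2             # negative numbers
    assert safe_div(1, 1) == 1               # one
    assert math.isnan(safe_div(5, 0))        # zero divisor returns NaN
    assert safe_div(10**100, 10**100) == 1   # obscenely large numbers

def test_garbage_types():
    # arbitrary types should throw rather than return nonsense
    with pytest.raises(TypeError):
        safe_div("3", [])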

You can test all variations of input if the set of inputs is small enough, but you can't do it without reproducing the functionality of the function you're testing

That explanation makes more sense

If you're interested in exhaustive testing, you're correct, but consider that the implementations need not be the same. You might want a very fast implementation at runtime (using parallelism, implementation tricks, etc.), but an easy-to-verify, easy-to-read, easy-to-modify implementation to compare against for testing.

For most problems though, exhaustive testing simply isn't viable, so you go with the next best thing.
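
A quick sketch of that pattern, with both implementations invented for the example (a closed-form "fast" version checked against a slow, obviously correct loop):
def fast_sum_of_squares(n):
    # the "clever" implementation you actually ship
    return n * (n + 1) * (2 * n + 1) // 6

def slow_sum_of_squares(n):
    # the boring reference: easy to read, easy to believe
    return sum(i * i for i in range(n + 1))

def test_fast_matches_reference():
    for n in range(500):
        assert fast_sum_of_squares(n) == slow_sum_of_squares(n)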

One more thing: if there is a known relation between the input and output of a function, and the function isn't easily invertible, you can write simple tests by exploiting that.

For instance, a function that decomposes a product of prime numbers can be tested by verifying that the outputs are prime and their product is equal to the input, without requiring you to re-implement the behaviour in the test at all.
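
In pytest that might look like this (factorize stands in for the function under test; a toy trial-division version is included only so the sketch runs, and is_prime is a cheap check used by the test):
from functools import reduce

def factorize(n):
    # toy stand-in for the real (presumably cleverer) implementation
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def is_prime(n):
    # cheap primality check; note this is NOT a re-implementation of factorize
    return n >= 2 and all(n % i for i in range(2, int(n ** 0.5) + 1))

def test_factorize():
    for n in [2, 30, 97, 1024, 600851475143]:
        factors = factorize(n)
        assert all(is_prime(f) for f in factors)       # every factor is prime
        assert reduce(lambda a, b: a * b, factors, 1) == n  # product gives back n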

Why write tests if you don't write them before your code? TDD ensures that your tests represent what your code SHOULD do, not what it actually does. And it's far more important to verify that your function does what you intend it to, rather than what you happened to make it do. Writing tests after you write code just confirms you know what your function does. Further, TDD is a good test of whether you're ready to write a piece of code: if you can't test it, you haven't parsed out the requirements enough. And TDD is not domain specific: if you write code, you should be writing TDD code.

TDD/BDD are wonderful for any software project regardless of size since it forces you to write only the minimal required code to solve the task.

What you are talking about is unit testing, which has nothing to do with TDD. You should use both.

>And TDD is not domain specific: if you write code, you should be writing TDD code
Unless you do enterprise or safety critical programming TDD is a waste of time

TDD is only useful for maintaining existing systems. You can pretty much guarantee that any organization using TDD isn't doing actual software development.

What would be the test code for this?
const square = x => x**2;

It's this simple:
expect(square(2)).toEqual(4);

You're just testing inputs vs outputs

Attached: Nim language dot org.png (356x356, 45K)

yes, but you're only testing one case, which is not a very robust test, and you're also doing the test using code equivalent to what the function is running (square() vs x**2)

Glad there are finally some pro-TDD people in here sharing their perspective.

I tend to agree with all your points in theory; I've seen tons of tests that are either over-specified or miss the actual intended behavior because they were written after the fact.

My question is how do you deal with cases where your expectation doesn't match up with the reality of how your system actually behaves in the end? I mostly do system integration work, and oftentimes I start out with one interface, realize that in practice I need to take some additional input, connect to some other service, or provide some additional output, and my interface ends up changing drastically. Usually I look back and I'm glad I didn't start with tests because they would have to get rebuilt anyway.

Does TDD offer tools to get around this, or is this just a failure in my requirements gathering, system documentation, etc.?

I'm not sure that this holds in practice. If you only ever write the most minimal code that satisfies all the tests, this is certainly true, but there's nothing in particular about TDD that forces you to avoid writing extra code. Do you reinforce this in your CRs and make people remove code that isn't required to meet tests? Do you push people to ignore points of extensibility in their initial designs?

That's literally the fucking point of testing, you dumb fucking bitch. What the fuck are you gonna do every time there's a minor change in anything? Go back and manually run the function every time? And use your imagination, jesus christ. The square example is just a layman's intro. Get into more advanced shit if you want, like fuzz testing, correctness checking (so many different ways), and much more. Don't limit your mind to this one BS example I gave you.

Attached: no2.jpg (641x530, 25K)

so your example was bad yet I'm the stupid person?
For any case I can think of, testing to see whether the function produced the correct output just involves writing the function over again

you're not supposed to reimplement the function as a test; you're supposed to hardcode known values for edge cases, so if the test ever fails you'll know the viewmodel broke
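
i.e. something like this instead (square carried over from the earlier example; the expected values are worked out by hand once, so no x**2 in the asserts):
def square(x):
    return x ** 2

def test_square_known_values():
    assert square(0) == 0
    assert square(2) == 4
    assert square(-3) == 9
    assert square(12) == 144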

Feel free to look over any problem in en.wikipedia.org/wiki/List_of_NP-complete_problems

All of these have a fast verification that you could use to confirm a test case, where re-implementing the algorithm under test would be very slow (by necessity).
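
For instance, checking a subset-sum answer is linear-time even though finding one is NP-complete. A rough sketch (solve_subset_sum would be the solver under test; a brute-force stand-in is included only so the sketch runs):
from itertools import combinations

def solve_subset_sum(items, target):
    # brute-force stand-in for the real, presumably cleverer, solver
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            if sum(combo) == target:
                return list(combo)
    return None

def test_subset_sum_certificate():
    items, target = [3, 34, 4, 12, 5, 2], 9
    subset = solve_subset_sum(items, target)
    # verification is cheap regardless of how the answer was found
    assert sum(subset) == target
    remaining = list(items)
    for x in subset:
        assert x in remaining   # every chosen element really came from the input
        remaining.remove(x)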

unit tests are a meme. instead spend your time writing comprehensive automation tests


to put it simply, your application could be broken and your unit tests would still pass if the tests didn't or couldn't cover that part of the code. But automation tests would catch anything that broke the UX flow

You can confirm *A* test case, you just can't confirm all or even most test cases, so all it does is kind of prevent people from going into your code and fucking things up, rather than truly verifying its correctness

This is why you are a shit programmer. I bet you haven't written a single line of code in your life, you fucking role playing faggit.

How would you test a function that takes a string containing two dates separated by a hyphen and finds the time difference between them in hours?

You fucking write an assert statement using the function and what you know the answer will be. Literally a fucking test. It's shit you would do mentally, but you put it down in code so you don't have to confirm it every fucking time yourself.

Now take a brick and bash yourself in the eye, ya cunt.

Attached: helper.png (1063x1063, 245K)
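
To make the date question concrete, a rough sketch (hours_between is hypothetical, with a toy implementation included so the test actually runs; the format is assumed to be "YYYY-MM-DD-YYYY-MM-DD"):
from datetime import datetime

def hours_between(s):
    # toy implementation: split the assumed fixed-width format
    start, end = s[:10], s[11:]
    fmt = "%Y-%m-%d"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 3600

def test_hours_between():
    assert hours_between("2019-01-01-2019-01-03") == 48
    assert hours_between("2019-01-01-2019-01-01") == 0
    # reversed order: writing the test forces you to decide the behaviour (negative here)
    assert hours_between("2019-01-03-2019-01-01") == -48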

literally

n o t h i n g
o
t
h
i
n
g

to do with writing tests.

What I mean by TDD is unit-test-driven development. When I write a function that parses X into Y or something like that, particularly a pure function, one that doesn't touch IO or anything like that, TDD works. But at the project level, TDD doesn't tend to work as well; integration tests are normally best built after the fact, in my opinion. There might be some method out there that I don't know about, but I've yet to come across it.

You've clearly never worked in any sort of production environment, are very junior, or just a bad developer. That's like saying zoos should only put barriers around the lions and tigers because only they'll maul people to death. Testing literally has no impact on security; it's documentation of what code is meant to do, coupled with verification that it does what it was intended to do. Failing code on a company's website can cause millions of dollars in sales losses even with zero impact on security.

That makes literally no sense. You can't TDD code that already exists. Also, individual developers normally make the choice to use TDD, not their company.

Confirming a single test case does not confirm the function works properly under all circumstances, so what's the point?

Use fuzz testing or contract testing or correctness approaches. You don't gotta be exact. Just get the edge cases and a few normal cases.

Now take a shotgun and eat a shell you insufferable bitch.

Attached: d24.jpg (680x651, 121K)

>You've clearly never worked in any sort of production environment, are very junior, or just a bad developer.
I write code on my own; I don't need to test it because I know what the fuck I'm doing. That doesn't make me a bad developer

it's not supposed to test all the cases, it's supposed to test that it's 'basically hooked up right'. Make sense?

Cue 6 months from now, when you go back and make changes and you've forgotten how everything works.

Join an actual company faggit. Then you'll get why people test. And it's not just for teams.

Fuck you :)

Attached: 1528908856271.png (380x349, 77K)

Once again, what's the point? Making sure other idiot developers don't come in and break your code? Sounds like a real productive work environment you have there
Please keep being needlessly hostile and posting more Jow Forums memes, it makes you look so intelligent

Writing test first limits your thinking.

Project I'm working on right now is 9 years old, works, not buggy, not a unit test in sight

Yeah, not breaking code is literally the reason for tests.

False. Test driven development is a sign of a mature ability to plan out the code you'll implement.

Ah, get a real job, get blown out of the water by developers who write tests, and then let me know how that arrogance is treating you.

TDD is one of the big differences between writing code and software engineering.

You are 100% right. Tests don't prove correctness; they prevent regressions.

If you want to prove the correctness of your implementation, you can write a formal proof. In practice people will do this for algorithms, but never for code, because there is so much complexity in your compiler, instruction set, physical CPU, etc.

Consider int add(int a, int b) { return a+b; }.
If you make sweeping assumptions about the lower-level implementation details, you can write a proof of correctness for this "algorithm". But if your compiler misinterprets it (maybe someone redefined "+"?), or if another thread changes a memory value, or any number of other things happen, the proof is irrelevant. However, a simple test of "assert(add(1,2) == 3)" would catch it.
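
For flavour, here's what a (trivial) machine-checked proof of that toy example can look like. A minimal Lean 4 sketch, under the idealised assumption that int behaves like unbounded Nat:
def add (a b : Nat) : Nat := a + b

-- the spec and the definition coincide, so the proof is definitional
theorem add_spec (a b : Nat) : add a b = a + b := rfl

-- a slightly less trivial fact, reusing a library lemma
theorem add_comm' (a b : Nat) : add a b = add b a := Nat.add_comm a b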

Yes this is a toy example, and nobody at NASA concerns themselves about basic add() functions. I'm sure you can use your imagination to come up with more interesting and realistic cases.

Also, even if you assume the environment works exactly as you expect in your proof, nobody wants to re-write it every time you make a non-functional change to the code. Unit tests are flexible to any implementation.

Finally, as others have said, unit testing is a tool of TDD, but the two are fundamentally separate. If you want to have a discussion about why unit testing is a waste of time, feel free to make your own thread.

>Ah, get a real job
why become a wagecuck when I make enough money on my own time without having to put up with retarded shit like TDD

Even as someone who is not a TDD advocate, I disagree with this. Tests allow you to make changes freely and experiment with new ideas without worrying about whether you missed a case that will blow up your project.

Even if I don't like writing tests that much, I love working on code with good test cases because I can make whatever changes I want and let the tests tell me if I made a mistake.

You write tests for a few reasons.

1. To verify that the code fails gracefully when it should fail;
2. To test that the code produces accurate results in a few cases where the correct output is known;
3. To test that the code behaves appropriately in extreme situations.
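
A toy pytest sketch of all three (parse_age is invented for the example):
import pytest

def parse_age(s):
    # toy function under test
    age = int(s)                 # raises ValueError on garbage
    if not 0 <= age <= 150:
        raise ValueError("age out of range")
    return age

def test_fails_gracefully():     # reason 1
    with pytest.raises(ValueError):
        parse_age("not a number")

def test_known_values():         # reason 2
    assert parse_age("42") == 42

def test_extremes():             # reason 3
    assert parse_age("0") == 0
    assert parse_age("150") == 150
    with pytest.raises(ValueError):
        parse_age("99999")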

Once you get beyond CS101-level, 100-line programs, you absolutely need tests to verify that the code is doing what you think it's doing.

Having a suite of tests allows you to safely modify code later. Just run your test suite after the changes are made to verify that you didn't break anything. Can you test every case? Trivially, no. But you can test enough cases (interior, edge, and absurd) which should cover the cases you don't test.

I spend more time writing tests than I do writing production code.

Because you would make more if you stopped acting like a child, adopted good practices, and then worked by yourself. The fact that you think TDD is "retarded shit" tells me exactly how mediocre your code is. If you had a real argument for why it's a bad idea, then maybe I'd believe you. But you act like you're above making mistakes, which is the first sign you'd be a junior dev at BEST. I certainly wouldn't hire you, or trust you to write code for me.

>Writing tests after you write code just confirms you know what your function does
You say that like that isn't already incredibly useful
>write code that you think does X
>afterward, write test that asserts the code does X
>test fails
>discover that code actually does Y, not X

Software is never finished

I did the same thing for some of my own pet projects. Wrote the app, worked great. Went in and made some changes, and suddenly stuff didn't work. Eventually figured out what the problem was, but it would have been caught immediately if the tests were there to begin with.

The issue isn't "how many bugs do you have in the code" but "how quickly can you identify there was a bug and what caused it". By writing the tests, you find issues with functionality immediately, and it becomes almost trivial to see which changes broke it.

Yep, this is the base case for TDD, 100% correct.

You make more money when you own your own business than working a job
It's a bad idea because it's a waste of time. Nobody is above making mistakes, but writing a set of comprehensive tests before you've even built the system is counterproductive; you might not even be sure how the system is going to turn out
I do informal, temporary tests after the system is finished

Ahhh, so you waste massive amounts of time by not planning out your system correctly. You're a bad developer, just admit it.

I didn't say you'd make more money as an employee, I said your business would make more money if you were an adequate developer.

Stop refactoring to break the loop and release the software already.

You missed his point. He's saying that if you write the function and then the test afterwards, it's easy to accidentally just write a test that passes based on the function. What he wants you to do is write a test for the intended functionality first, then write feature code that makes the test pass.

You might think "I'll always write a test for the intended functionality, not one that just passes", but it's easier than you think to make a mistake like that without realizing it.

This is everything wrong with technology

>Ahhh, so you waste massive amounts of time by not planning out your system correctly
Of course not
Planning is 90% of writing the program
Writing pre-emptive tests is a very bad way of "planning" things that aren't closed systems though

>I don't need to test it because I know what the fuck I'm doing
This, but unironically. Most code simply does not need unit tests. If you're writing a thing with a dozen fiddly corner cases, then sure. But if you've designed the system properly, then most of the code should be fairly straightforward, and the only failure mode you're likely to see is "function is completely broken on nearly all inputs", which will be caught by even the most cursory integration tests (including just manually running the program to see if it crashes).

Of course, this all assumes you're using a language with an actual modern type system.

Continuous integration and continuous delivery are essential for keeping software healthy and well maintained.

Imagine a studio like Epic Games, with its hundreds if not thousands of developers/map designers/artists, having to make constant changes to their codebase to stay relevant and profitable. The team needs to track and ensure the quality of every line of code that goes in.

Continuous integration, continuous delivery, and tests are how teams achieve such a rapid turnaround.

Attached: testdev.jpg (1822x874, 182K)

artists don't make changes to codebases

once your project starts growing in size, it will become impossible to test manually in an efficient way
that's why you use TDD: you write out a set of actions and the results you expect. In fact, I will probably forget to manually retest something because that was not my primary focus; changing another method was an afterthought. Turns out that I accidentally caused an off-by-one error or something, and the tests will catch that before it goes into production.
with traditional unit tests, you write the method first, since you'll probably have to mock out specific objects, and you won't know what to mock until you finish the method
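
e.g. with Python's unittest.mock (format_greeting and the fake client are invented for the sketch):
from unittest.mock import Mock

def format_greeting(client, user_id):
    # the method under test; client normally talks to a real service
    user = client.fetch_user(user_id)
    return f"Hello, {user['name']}!"

def test_format_greeting():
    client = Mock()
    client.fetch_user.return_value = {"name": "Anon"}
    assert format_greeting(client, 42) == "Hello, Anon!"
    client.fetch_user.assert_called_once_with(42)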

But their contributions do affect the codebase. What if a model is too large for the engine to handle? These things need to be tracked at that scale, and they need to be tested.

Literally engage your braincells and think about the assertions you're writing, instead of mashing buttons like a chimpanzee until the light turns green.
This is only confirming my suspicion that the main benefit of TDD is in preventing stupid people from hurting themselves.

Yeah, but what about when describing the results you expect is more complex than writing the fucking algorithm in the first place?

Then you haven't abstracted the problem enough. A test won't reveal the theory of everything. Just assert that the methods you used to arrive at that theory are sound and well proven.

I'm assuming you're talking about testing a webapp?
that's easy: just have some seed/test data so you can control exactly what results you will get, and customize it so you can have edge cases in your test data
if you're talking about testing a method, asserting what a method returns is trivial; even if you're verifying that an ldap3 connection object has specific flags or some shit, just assert the values of the attributes

For people familiar with Python, here is some test code using pytest.
class TestClass:
    def testNull(self):
        assert func(None) == None
    def test10(self):
        assert func(10) == 11
    def testBeeg(self):
        assert func(10000000000000000000000000000000000000000000) == 10000000000000000000000000000000000000000001
    def testNeg(self):
        assert func(-10) == -9
    def testBeegNeg(self):
        assert func(-10000000000000000000000000000000000000000000) == -9999999999999999999999999999999999999999999
    def testlist(self):
        assert func([]) == None

def func(x):
    if x is None:
        return None
    else:
        return x + 1

And its output.
class TestClass:
    def testNull(self):
        assert func(None) == None
    def test10(self):
        assert func(10) == 11
    def testBeeg(self):
        assert func(10000000000000000000000000000000000000000000) == 10000000000000000000000000000000000000000001
    def testNeg(self):
        assert func(-10) == -9
    def testBeegNeg(self):
        assert func(-10000000000000000000000000000000000000000000) == -9999999999999999999999999999999999999999999
    def testlist(self):
        assert func([]) == None

def func(x):
    if x is None:
        return None
    else:
        return x + 1

>Then you haven't abstracted the problem enough
I have a physics system for a game
I can write complex geometrical equations to see if everything responds correctly, or I can just start it up, move around, run into corners, and see if it reacts how it's supposed to

Got a story for you guys.
>Backend API dev for an airline
>All of Tech at this company is undergoing changes
>Moving from Scrum to SAFe
>Massive cloud migration
>New internal security protocol has to be adopted by everyone
>etc...
>And every team has to keep a proven record of TDD (based on commit history)
>We ask why
>Turns out some clueless business fuck heard about TDD and how great it is and decided every single one of the 125,000 developers at the company should adopt it
>Ask if we can, like, not
>Guy comes back with "if we find out you're not practicing TDD, we will have to have someone check all of your future commits to make sure you're practicing TDD and using Selenium"
>SELENIUM
>FOR RESTFUL APIS
And this is how my absolute disgust for business faggots in tech and 'solution architects' came about.
I think I'm gonna go work for IBM instead soon; hopefully they understand what the fuck they're doing and I can leave this shithole of a company

Wrong output :P
$ pytest
============================= test session starts ==============================
platform linux -- Python 3.7.2, pytest-4.3.1, py-1.8.0, pluggy-0.9.0
rootdir: /builds/xxxx/Pytests, inifile:
collected 6 items

test_main.py .....F [100%]

=================================== FAILURES ===================================
______________________________ TestClass.testlist ______________________________

self =

def testlist(self):
> assert func([]) == None

test_main.py:15:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

x = []

def func(x):
if x is None:
return None
else:
> return x + 1
E TypeError: can only concatenate list (not "int") to list

test_main.py:20: TypeError
====================== 1 failed, 5 passed in 0.13 seconds ======================

Do you want to run into corners every time you run your game? Doing that starts to become boring and time-consuming when you also want to test other functionality. By writing the test, you can verify the functionality instantly every time, letting you worry about other things.

Once you start adding more functionality, manual tests like that become very cumbersome, and the time spent upfront writing the test will save you all of that time later.
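
Even a crude automated version of "run into corners" beats redoing it by hand. A rough sketch (clamp_to_room is a hypothetical collision helper):
def clamp_to_room(x, y, width, height):
    # toy collision response: keep the point inside the room
    return min(max(x, 0), width), min(max(y, 0), height)

def test_corner_collisions():
    # walking into the top-left corner pins you at (0, 0)
    assert clamp_to_room(-5, -3, 640, 480) == (0, 0)
    # the far corner clamps to the room bounds
    assert clamp_to_room(9999, 9999, 640, 480) == (640, 480)
    # moving inside the room leaves you untouched
    assert clamp_to_room(100, 200, 640, 480) == (100, 200)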

>Do you want to run into corners every time you run your game?
No, but I'm not modifying the physics system every time I run the game, am I? If I make modifications to it, then I can test it manually
Dependency spaghetti, where something in one seemingly unrelated area breaks something in another, is just bad software architecture

based nim poster

>be embedded developer
>have never worked in a TDD shop
>neither have any of my coworkers

TDD is literally waterfall; the testing just comes before the implementation. If you're implementing a parser for a standard protocol, or something else that is clearly specified up front and not likely to change, it's just as fine as waterfall. Agile tends to be better for the remaining 99% of software projects, so TDD is a bad fit for them. It's pointless to test for regressions in code where you don't have anything that can regress, since you don't actually know what the code is really supposed to do yet. You're just adding more work, more code, and more things that can break.

What company so I can avoid it?

Aaaaand fuck reading the rest of this thread. I’m out

Boeing?

It seems you've never refactored a decently complex program

I usually test refactors by running the program and seeing if it works or not

>I usually test refactors by running the program and seeing if it works or not
This guy.

Well I often don't know what I even need before I start.

of course TDD is too hard for you and your coworkers, bet you faggots are still using waterfall

Imagine having to use some sort of programming ritual like agile or waterfall or scrum instead of just being able to get shit done