Crypto backtesting
So this is just some thoughts on my first non-trivial project using Copilot.
At the time of writing, most companies are resistant to using Copilot.
Most senior developers think their code is unique; actually most things have been done before. If you're doing something that is genuinely new, then are you changing the world or writing unmaintainable code?
A lot of using it well is just being experienced and knowing the computer science terminology. Then you can quickly recognise whether a suggested solution is good enough, or even write something that works in an unfamiliar language.
There are concerns about the correctness of generated code; but if it passes the tests and benchmarks, and is in a similar style to the rest of the codebase, then what counts as correct? All significant codebases have dark corners that are not well understood, with terrible code that goes against the coding standard and is untested but works. Well, it seems to work. Until you find it works, but for the wrong reasons.
If you already know how to code, then it's basically a very clever code-completion tool.
It also suggests things that I hadn't considered but are actually quite neat solutions, and language features that I didn't know existed.
After a month or so I ran the standard sloccount tool on the project. I have always quite liked it as a rough finger in the air, but perhaps we need new ways to measure these things?