MAMEWorld >> EmuChat

Pages: 1

italieAdministrator
MAME owes italie many thank yous, hah
Reged: 09/20/03
Posts: 15246
Loc: BoomTown
Send PM


AWESOME paper on beating Super Mario A.I. with RAM analysis
#307298 - 04/15/13 02:07 AM


http://www.geek.com/games/computer-program-learns-to-play-classic-nes-games-1552024/



Tomu Breidah
No Problems, Only Solutions
Reged: 08/14/04
Posts: 6819
Loc: Neither here, nor there.
Send PM


Re: AWESOME paper on beating Super Mario A.I. with RAM analysis new [Re: italie]
#307301 - 04/15/13 04:26 AM


LOL at the Tetris part.

*algorithm pauses game*

"The only winning move is not to play"



LEVEL-4



DMala
Sleep is overrated
Reged: 05/09/05
Posts: 3989
Loc: Waltham, MA
Send PM


Re: AWESOME paper on beating Super Mario A.I. with RAM analysis new [Re: italie]
#307305 - 04/15/13 05:10 AM


> http://www.geek.com/games/computer-program-learns-to-play-classic-nes-games-1552024/

Pretty fascinating stuff.

I could have done without the unnecessary jump cuts in the video, or his guitar, e-drums, and fixie conspicuously in the background, though.



R. Belmont
Cuckoo for IGAvania
Reged: 09/21/03
Posts: 9716
Loc: ECV-197 The Orville
Send PM


Re: AWESOME paper on beating Super Mario A.I. with RAM analysis new [Re: DMala]
#307321 - 04/15/13 06:14 PM


> >
> http://www.geek.com/games/computer-program-learns-to-play-classic-nes-games-1552024/
>
> Pretty fascinating stuff.
>
> I could have done without the unnecessary jump cuts in the video, or his guitar,
> e-drums, and fixie conspicuously in the background, though.

I used to think Josh Topolsky from The Verge should be pictured in the dictionary next to "hipster". This video gives us a new challenger.

Skip to 8:16 and you'll miss all that and just see how the program plays games (spoiler alert: a human speedrunner would wipe the floor with it, let alone a TAS. But it's kind of cool to watch).



StilettoAdministrator
They're always after me Lucky ROMS!
Reged: 03/07/04
Posts: 6472
Send PM


Re: AWESOME paper on beating Super Mario A.I. with RAM analysis new [Re: R. Belmont]
#307335 - 04/15/13 08:25 PM


> Skip to 8:16 and you'll miss all that and just see how the program plays games
> (spoiler alert: a human speedrunner would wipe the floor with it, let alone a TAS.
> But it's kind of cool to watch).

Maybe I misunderstood but I thought his algorithm was optimizing for points as well as completing the level in time...

- Stiletto



Anonymous
Unregistered
Send PM


Re: AWESOME paper on beating Super Mario A.I. with RAM analysis new [Re: Stiletto]
#307342 - 04/15/13 11:16 PM


> Maybe I misunderstood but I thought his algorithm was optimizing for points as well
> as completing the level in time...

It looks for outcomes that cause numbers to go up & it doesn't look ahead too far.

Teaching it about time limits would be a whole new thing.
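The "numbers go up" idea is simple enough to sketch in a few lines. This is my own toy reconstruction, not the paper's actual code; the function name and demo data are made up. It scans RAM snapshots recorded from a human playthrough and keeps the byte locations that climb monotonically, like a score counter.

```python
# Toy sketch (hypothetical, not the paper's code) of the "learnfun" idea:
# scan recorded RAM snapshots from a playthrough and keep the locations
# whose values consistently go up, on the theory that "up is good".

def find_increasing_locations(snapshots):
    """snapshots: list of same-length RAM dumps, one per sampled frame.
    Returns the addresses that never decrease and increase at least
    once across the recording."""
    ram_size = len(snapshots[0])
    candidates = set(range(ram_size))
    increased = set()
    for prev, curr in zip(snapshots, snapshots[1:]):
        for addr in list(candidates):
            if curr[addr] < prev[addr]:
                candidates.discard(addr)   # went down: not "up is good"
            elif curr[addr] > prev[addr]:
                increased.add(addr)
    return candidates & increased

# Toy demo: address 0 climbs like a score counter, address 1 jitters,
# address 2 never changes.
frames = [bytes([i, i % 2, 7]) for i in range(5)]
print(sorted(find_increasing_locations(frames)))  # [0]
```

On a real NES you'd feed it the 2048-byte work RAM every frame instead of these three-byte toys; the principle is the same.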



StilettoAdministrator
They're always after me Lucky ROMS!
Reged: 03/07/04
Posts: 6472
Send PM


Re: AWESOME paper on beating Super Mario A.I. with RAM analysis new [Re: ]
#307345 - 04/15/13 11:27 PM


> > Maybe I misunderstood but I thought his algorithm was optimizing for points as well
> > as completing the level in time...
>
> It looks for outcomes that cause numbers to go up & it doesn't look ahead too far.
>
> Teaching it about time limits would be a whole new thing.

Errr, yeah, you're right, and I knew that last night when I watched it.

Anyhow, I thought most speedruns didn't optimize for points going up, just the speed at which levels are completed.

- Stiletto



Firehawke
Manual Meister
Reged: 08/12/06
Posts: 665
Send PM


Neat try, but not really comparable. new [Re: R. Belmont]
#307360 - 04/16/13 04:40 AM


Yeah, I don't think the thing is capable of recognizing and taking advantage of the wide range of tricks that humans can use. Given enough time, it'll come up with a serviceable SMB1 run to completion, but I wouldn't call it even remotely comparable to what even a non-specialist speedrunner could do with the most basic tricks you learn by playing.

Still a neat trick, though. Reminds me of the Pac-Tape project a number of years back, which was designed to find new level patterns for Pac-Man.



---
Try checking the MAME manual at http://docs.mamedev.org



R. Belmont
Cuckoo for IGAvania
Reged: 09/21/03
Posts: 9716
Loc: ECV-197 The Orville
Send PM


Re: AWESOME paper on beating Super Mario A.I. with RAM analysis new [Re: Stiletto]
#307393 - 04/16/13 05:28 PM


> Anyhow, I thought most speedruns didn't optimize for points going up, just speed at
> which levels completed.

There are different kinds of speedruns; "100%" means you get everything (which doesn't necessarily mean max possible score but often does) whereas "any%" means you do the bare minimum to get a YOU WIN ending, and many games have unique categories. For Castlevania SOTN, you can run 100%, any%, or all bosses, with or without "zips" (using game bugs to warp through walls), as Alucard or Richter (or Maria on the Saturn version). And that's not including real hardware vs. Sony's official emulator on the PS3/PSP/Vita (which has similar but not quite identical load times and slowdown) vs. the Xbox Live port (which has no slowdown or load times, so it's the most popular to run now).

Thus, it's possible for a single game to have a dozen or more unique world records at the same time.



Anonymous
Unregistered
Send PM


Re: Neat try, but not really comparable. new [Re: Firehawke]
#307641 - 04/22/13 12:01 PM


> Given enough time, it'll come up with a
> serviceable SMB1 run to completion, but I wouldn't call it even remotely comparable
> to even what a non-specialist speedrunner could do with the most basic tricks you
> learn by playing.

The only point where it learns is when you train it on which bytes are important by playing the game. So if you fed it a speed run, it might learn different rules.

Otherwise the only way to get it to play better is to make the search longer.

But the fact that it can learn to play at all is the impressive thing; that it exploits stomping at the top of a jump is cute, but no more impressive.



Bryan Ischo
MAME Fan
Reged: 03/28/10
Posts: 358
Send PM


Re: AWESOME paper on beating Super Mario A.I. with RAM analysis new [Re: italie]
#307660 - 04/22/13 09:53 PM


> http://www.geek.com/games/computer-program-learns-to-play-classic-nes-games-1552024/

It's not as cool as it sounds. To boil it down to the most basic features, the author has created a program that:

a) Evaluates the entire 2048 byte chunk of memory of an NES system without any actual understanding of what any of it means, only that some values can be seen to go "up", and "up is good".

b) Runs through an emulator, trying random inputs and finding the sequence of inputs that most increases the memory locations for which it was already determined that "up is good"

c) Uses a modicum of smart heuristics to reduce the search space (instead of trying all 256 possible input combinations every frame, it has a better algorithm for evaluating only those inputs that are most likely to be good)

d) Uses some other mechanisms that I didn't read all of the details about to try to find workarounds to the inevitable shortcomings of this simplistic approach

e) After chewing for a long time, comes up with an input sequence that was able to raise the important memory location values the most

It's not a sophisticated A.I. by any stretch of the imagination. It's a simple search function with some tricks to try to make it more effective, combined with a novel (although fairly simplistic) mechanism for evaluating the search function (i.e. trying to find values in memory that can consistently be made to increase).

It takes an hour of CPU computation just to output the inputs for 16 seconds of gameplay. It cannot play the game in real time. Furthermore, the program doesn't actually "exploit bugs in the game" in the sense of knowing about them and anticipating and using them. If a random input sequence at a specific moment of the game happened to result in Mario surviving due to a game bug, then so be it. The A.I. will not use that information to help it plan a future move.

Additionally, the author somewhat aggrandizes the program with the long "white paper" title and his video. And the white paper itself is more like an informal email describing what he did, put into the shape of a white paper, than an actual white paper meant to be taken seriously. I've never seen a white paper use the term "horse shit" before (nor the dozens of other informalities it also uses), for example.
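Steps (b) and (e) of that breakdown amount to a brute-force forward search. Here is a hedged sketch of that loop as I understand it, with a stand-in toy "emulator" rather than a real NES core; the addresses, button names, and function names are all made up for illustration:

```python
# Sketch of the search described in (b) and (e): from the current state,
# try short random input sequences, score each by how much the
# "up is good" RAM locations rose, and keep the winner. The "emulator"
# here is a toy stand-in, not a real NES core.
import random

GOOD_ADDRS = [0x07DD, 0x07DE]  # hypothetical "up is good" score bytes

def score(ram):
    return sum(ram.get(a, 0) for a in GOOD_ADDRS)

def toy_step(ram, button):
    # Stand-in for one emulated frame: in this toy, pressing RIGHT
    # nudges a "score" byte up and every other button does nothing.
    ram = dict(ram)
    if button == "RIGHT":
        ram[GOOD_ADDRS[0]] = ram.get(GOOD_ADDRS[0], 0) + 1
    return ram

def search_best_inputs(ram, horizon=10, tries=50, seed=0):
    rng = random.Random(seed)
    buttons = ["RIGHT", "LEFT", "A", "B", "NONE"]
    best_seq, best_score = None, -1
    for _ in range(tries):                  # step (b): random candidates
        seq = [rng.choice(buttons) for _ in range(horizon)]
        state = ram
        for b in seq:
            state = toy_step(state, b)
        if score(state) > best_score:       # step (e): keep the winner
            best_seq, best_score = seq, score(state)
    return best_seq, best_score

seq, s = search_best_inputs({})
print(s, "RIGHT presses in the luckiest sequence found")
```

The real program replays candidates through an actual emulator with savestates, which is why it is so slow: every candidate sequence costs emulated frames.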



italieAdministrator
MAME owes italie many thank yous, hah
Reged: 09/20/03
Posts: 15246
Loc: BoomTown
Send PM


Okay Debbi Downer... new [Re: Bryan Ischo]
#307679 - 04/23/13 02:11 AM


Take it for what it is, not for what it could be. The idea is novel, yes, but fascinating nonetheless.

I agree he's a douche...

> >
> http://www.geek.com/games/computer-program-learns-to-play-classic-nes-games-1552024/
>
> It's not as cool as it sounds. To boil it down to the most basic features, the author
> has created a program that:
>
> a) Evaluates the entire 2048 byte chunk of memory of an NES system without any actual
> understanding of what any of it means, only that some values can be seen to go "up",
> and "up is good".
>
> b) Runs through an emulator, trying random inputs and finding the sequence of inputs
> that increases the memory locations for which it was already determined that "up is
> good" the most
>
> c) Uses a modicum of smart heuristics to reduce the search space (instead of
> trying all 256 possible input combinations every frame, it has a better algorithm for
> evaluating only those inputs that are most likely to be good)
>
> d) Uses some other mechanisms that I didn't read all of the details about to try to
> find workarounds to the inevitable shortcomings of this simplistic approach
>
> e) After chewing for a long time comes up with an input sequence that was able to
> raise the important memory location values the most
>
> It's not a sophisticated A.I. by any stretch of the imagination. It's a simple search
> function with some tricks to try to make it more effective, combined with a novel
> (although fairly simplistic) mechanism for evaluating the search function (i.e.
> trying to find values in memory that can consistently be made to increase).
>
> It takes an hour of CPU computation just to output the inputs for 16 seconds of
> gameplay. It cannot play the game real time. Furthermore, the program doesn't
> actually "exploit bugs in the game", in the sense of knowing about them and
> anticipating and using them. If a random input sequence at a specific moment of the
> game happened to result in Mario surviving due to a game bug, then so be it. The A.I.
> will not use that information to help it plan a future move.
>
> Additionally, the author somewhat self-aggrandizes the program with the long "white
> paper" title and his video. And the white paper itself is more like an informal email
> describing what he did and then put into the shape of a white paper, than an actual
> white paper meant to be taken seriously. I've never seen a white paper use the term
> "horse shit" before (nor the dozens of other informalities it also uses), for
> example.



Bryan Ischo
MAME Fan
Reged: 03/28/10
Posts: 358
Send PM


Re: Okay Debbi Downer... new [Re: italie]
#307681 - 04/23/13 02:27 AM


> Take it for what it is, not for what it could be. The idea is novel, yes, but
> fascinating none the less.
>
> I agree he's a douche...

I also agree that it is entertaining and a novel approach to writing a game playing "A.I.". I had meant to thank you for posting the link but forgot to as I got caught up in reading the white paper.

I was thinking after I wrote my original post that I did sound kind of down on it, which is just an artifact of my typically abrupt writing style.

I'm all for people having creative ideas and trying them out. I was just trying to address the perception that what had been accomplished was more than it actually was - especially the notion that this playfun thing was 'taking advantage of glitches'. It was really just benefitting from glitches that happened to occur when it threw down some random controller inputs, in a one-off fashion.

Edited by Bryan Ischo (04/23/13 02:34 AM)



Tomu Breidah
No Problems, Only Solutions
Reged: 08/14/04
Posts: 6819
Loc: Neither here, nor there.
Send PM


Re: Okay Debbi Downer... new [Re: Bryan Ischo]
#307688 - 04/23/13 05:00 AM


> I'm all for people having creative ideas and trying them out. I was just trying to
> address the perception that what had been accomplished was more than it actually was
> - especially the notion that this playfun thing was 'taking advantage of glitches'.
> It was really just benefitting from glitches that happened to occur when it threw
> down some random controller inputs, in a one-off fashion.

The late artist Bob Ross had a name for that.

Happy little accidents.



LEVEL-4



DMala
Sleep is overrated
Reged: 05/09/05
Posts: 3989
Loc: Waltham, MA
Send PM


Re: Okay Debbi Downer... new [Re: Bryan Ischo]
#307698 - 04/23/13 07:27 AM


> > I agree he's a douche...

On a scale from Mr. Rogers to Tucker Max he rates a Charlie Sheen.

> I also agree that it is entertaining and a novel approach to writing a game playing
> "A.I.".

I think this misses the point a little. It's not about writing an AI to play a game; clearly there are much more effective ways to do that. This paper is about writing a program that can *learn* to play a game, which is a rather different and much more difficult task. You give it some sample data, just to teach it which bits are the ones it wants to increase, and then it tries to figure out the best combination of inputs to achieve that goal.

That's part of the reason why the criteria for "better" are so simplified. It's relatively easy for a program to compare memory states and determine whether certain bits went up or not. Doing something like trying to parse the video output the way a human would, would be insanely complex to the point of being impossible. It turns out humans can do an amazing amount of processing, particularly in the visual realm, with basically no conscious thought at all.

What's interesting about this paper to me is that it has implications beyond just playing video games. Machine learning is kind of a hot topic right now, as it has applications for things like making suggestions based on past behavior, a la Netflix or Amazon.



Bryan Ischo
MAME Fan
Reged: 03/28/10
Posts: 358
Send PM


Re: Okay Debbi Downer... new [Re: DMala]
#307702 - 04/23/13 08:36 AM



> I think this misses the point a little. It's not about writing an AI to play a game,
> clearly there are much more effective ways to do that. This paper is about writing a
> program that can *learn* to play a game, which is a rather different and much more
> difficult task.

Except the program doesn't "learn" anything. It just tries different random inputs every frame, over and over again, and then keeps the ones that gave the best results. It has some heuristics for trying to guess the best next input, and also for backtracking to a previous state and starting over when it thinks it can do better. That's basically it. In the end you get a canned set of inputs that it determined were the best. If you feed it a different starting point, it will do the whole thing over again without having "learned" anything about the best choices to make. And it takes an hour to generate 16 seconds of best-case random button mashing.

The author even plays back a segment of one run and talks about Mario's actions as if the game "meant" to do what it's doing. Mario is running right and tosses some fireballs at just the right time to hit some Goombas that haven't even appeared on screen yet (when they do appear, the fireballs are there just in time to hit them). The program didn't "plan" this move; it just so happened that, among the thousands of input combinations tried around that point in the game, one of them sent a fireball at the exact moment that ended up corresponding to a hit on a Goomba. The playfun algorithm then detected that saving that particular input sequence resulted in a higher score. So it was kept.

It's kind of just randomly mashing buttons over and over again and then keeping the "luckiest" sequence.

Not that the result isn't cool - if you can wait an hour to enjoy 16 seconds of really lucky button mashing then it's fun to watch. But the thing is in no way "learning" or trying to model any kind of decision process. It's just searching through button mash sequences and finding the best one.
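The backtrack-when-stuck behavior mentioned above can be illustrated as a toy loop. This is my own construction, not the author's code, and the score sequence fed in is made up; it only shows the control flow: advance while the best score keeps improving, and rewind (conceptually, reload an earlier savestate) after a run of failed attempts.

```python
# Toy illustration (not the author's code) of backtrack-and-retry:
# march forward keeping the best score seen so far, and rewind to an
# earlier savestate whenever `patience` attempts in a row fail to
# improve on it.

def run(scores_per_attempt, patience=2):
    """scores_per_attempt: scores the search observes, in order.
    Returns (best score reached, number of rewinds triggered)."""
    best, stalls, rewinds = 0, 0, 0
    for s in scores_per_attempt:
        if s > best:
            best, stalls = s, 0
        else:
            stalls += 1
            if stalls >= patience:
                rewinds += 1        # would reload an earlier savestate
                stalls = 0
    return best, rewinds

print(run([1, 2, 1, 1, 3, 0, 0, 4]))  # (4, 2)
```

In the real thing each "attempt" is a whole candidate input sequence played through the emulator, which is where the hour-per-16-seconds cost comes from.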



R. Belmont
Cuckoo for IGAvania
Reged: 09/21/03
Posts: 9716
Loc: ECV-197 The Orville
Send PM


Re: Okay Debbi Downer... new [Re: Bryan Ischo]
#307715 - 04/23/13 08:05 PM


> Not that the result isn't cool - if you can wait an hour to enjoy 16 seconds of
> really lucky button mashing then it's fun to watch. But the thing is in no way
> "learning" or trying to model any kind of decision process. It's just searching
> through button mash sequences and finding the best one.

It's a bit similar to sinister1's blindfolded Punch-Out!! run: not necessarily a useful or fun way to play the game, but it's fun to watch.



GatKong
Tetris Mason
Reged: 04/20/07
Posts: 5907
Loc: Sector 9
Send PM


Re: Neat try, but not really comparable. new [Re: ]
#307716 - 04/23/13 08:38 PM


May I ramble on the learning part?

Whilst in the Army, we were trying to design software that could find tanks hidden in the bushes, and discern friendly from enemy... ultimately, of course, to destroy the enemy ones with a robotic tank-killing machine. We used photographs to train the computer.

My point is that you can't always control what data points the computer is using for its value judgements. This guy's game learned to increase scores etc., and may have found "exploits", but here's what happened to us.

We were 100% on all our test photographs at identifying tanks and labeling only enemy ones.

Woot.

When it came time to demonstrate this live for the top brass... the stupid fucking thing found and "destroyed" every single tank without discriminating enemy from friendly, and the whole thing was scrapped as worthless.

Only after looking later at why it failed so miserably did we figure it out...
All the photos of the enemy tanks were taken at the beginning of the day... and all the photos of the friendly tanks were taken at the end of the day...

So the computer DID learn to discern the tanks... by the position of the sun! And we did the demonstration in the early day... thus, every tank was judged as enemy.

In retrospect, the program was a success... but alas, the funding was gone and it was dead in the water.


Personally, I found the guy's video awesome. He's helping the computer focus on what's important by eliminating distracting variables like visual output, music, etc. Baby steps.

Imagine someday a computer that can detect a liar because of a "tell" which people ignore, but which the computer discovered and exploits.

All good.







Bryan Ischo
MAME Fan
Reged: 03/28/10
Posts: 358
Send PM


Re: Neat try, but not really comparable. new [Re: GatKong]
#307754 - 04/24/13 09:50 PM


But the thing you guys keep missing, which is discussed a few posts above, is that the playfun game-playing program doesn't 'learn' anything. It just searches button mashing inputs for the best sequence. When the dude found that his search algorithm was not getting good results, he manually implemented heuristics to allow his search function to get better results. And repeated that process a few times. The algorithm didn't learn how to play Mario better - its author just learned how to alter the algorithm to do better searches in less time.

The playfun program takes just as long to find the best set of inputs for a sequence of gameplay the first time it's run as it does the 100th time. It doesn't 'learn' anything.

Also, no offense, but I find your Army story a little implausible. Surely anyone implementing such a system would do some trial runs before demoing the thing. And surely you wouldn't just happen to have every trial run use the same photos of tanks as the others, and surely the additional photos of tanks used in the trial runs wouldn't just also happen to have enemy tanks always photographed later in the day than friendly tanks. And even if you were ... careless ... enough to only use photos sourced from the same places for all training, dry-run, and demoing purposes, surely you had some way to inspect the program to see why it was making the choices it made (i.e. highlight on the photo those pixels which contributed to the friend/foe decision), and would have seen right away that the thing was always focusing on the leaf color or shiny surfaces facing west or whatever.

Edited by Bryan Ischo (04/24/13 09:56 PM)



RetroRepair
MAME Fan
Reged: 12/21/09
Posts: 259
Send PM


Re: Okay Debbi Downer... new [Re: R. Belmont]
#307766 - 04/25/13 03:06 AM


> It's a bit similar to sinister1's blindfolded Punch-Out!! run: not necessarily a
> useful or fun way to play the game, but it's fun to watch

This guy can play the game way better than I can without the blindfold!



http://www.youtube.com/retrorepair



DMala
Sleep is overrated
Reged: 05/09/05
Posts: 3989
Loc: Waltham, MA
Send PM


Re: Okay Debbi Downer... new [Re: RetroRepair]
#307779 - 04/25/13 06:26 AM


> This guy can play the game way better than I can without the blindfold!

You've got to feel the Force, let it flow through you.



R. Belmont
Cuckoo for IGAvania
Reged: 09/21/03
Posts: 9716
Loc: ECV-197 The Orville
Send PM


Re: Neat try, but not really comparable. new [Re: Bryan Ischo]
#307788 - 04/25/13 04:52 PM


> Also, no offense, but I find your Army story a little implausible.

You've never met any Mil-Spec software engineers, I gather? I used to have lunch semi-regularly with a group of them. I completely believe this story



Anonymous
Unregistered
Send PM


Re: Okay Debbi Downer... new [Re: Bryan Ischo]
#307789 - 04/25/13 04:53 PM


> Except the program doesn't "learn" anything.

It does learn: you play the game and it learns which values should go up.

It's no different to how animals learn; just assume that the computer associates those numbers going up with pleasure.

> It just tries different random inputs
> every frame over and over again and then keeps the ones that gave the best results.
> It has some heuristics for trying to guess the best next input and also for
> backtracking to a previous state and starting over again when it thinks it can do
> better. That's basically it.

When it's actually playing the game then yes. But then again no different to how animals react. Try something, if it doesn't result in what you want then go back and try again.

I don't know exactly how it picks the next move; it could try them all, pick at random, or do something entirely different. Although that is not exactly relevant to how it actually plays.

Edited by smf (04/25/13 04:56 PM)



Vas Crabb
BOFH
Reged: 12/13/05
Posts: 4462
Loc: Melbourne, Australia
Send PM


Re: Neat try, but not really comparable. new [Re: R. Belmont]
#307811 - 04/26/13 03:31 AM


> > Also, no offense, but I find your Army story a little implausable.
>
> You've never met any Mil-Spec software engineers, I gather? I used to have lunch
> semi-regularly with a group of them. I completely believe this story

I used to work with one, and found her modus operandi quite disturbing. She would write a spec, build something that met the spec precisely, then pat herself on the back for a job well done, ignoring whether it solved a real-world problem or not. More often than not you ended up with software that didn't actually do what it needed to, and when you rewrote it to actually work properly, you'd end up with about 20% of the lines of code afterwards. Lots of code achieving nothing at all. She was dropped in a round of redundancies.



Anonymous
Unregistered
Send PM


Re: Neat try, but not really comparable. new [Re: Bryan Ischo]
#307953 - 04/28/13 12:34 PM


> But the thing you guys keep missing, which is discussed a few posts above, is that
> the playfun game-playing program doesn't 'learn' anything.

I didn't miss that; I thought I'd been quite clear on the matter.

Currently it just has the type of knowledge you are born with, like when you are hungry you feel pain and so you cry.

playfun would need a whole lot more good and bad senses, and the ability to convert those into short-term and long-term memory as well, to actually learn how to play.



Anonymous
Unregistered
Send PM


Re: Neat try, but not really comparable. new [Re: Vas Crabb]
#307954 - 04/28/13 12:37 PM


> and when you re-wrote it to actually work properly, you'd end up with about 20% of
> the lines of code afterwards. Lots of code achieving nothing at all. She was dropped
> in a round of redundancies.

I'm surprised that she was let go and not promoted. She seems to do what she is told, and that is deemed more important by most management.

Edited by smf (04/28/13 12:37 PM)



italieAdministrator
MAME owes italie many thank yous, hah
Reged: 09/20/03
Posts: 15246
Loc: BoomTown
Send PM


Re: Neat try, but not really comparable. new [Re: ]
#307958 - 04/28/13 03:55 PM


> > and when you re-wrote it to actually work properly, you'd end up with about 20% of
> > the lines of code afterwards. Lots of code achieving nothing at all. She was
> dropped
> > in a round of redundancies.
>
> I'm suprised that she was let go and not promoted. She seems to do what she is told
> and that is deemed more important by most management.

You aren't kidding. I work for a bunch of ex military currently. Meeting the goal "Commanded" by the higher ups is job one.

Them: "You fix that calibration code issue?"
Me: "Yes, the code is approved by QC and ready to deploy to the field."
Them: "Whoa, we're going to need a code update for this issue?"
Me: "Ummmm...yes? The instructions were to fix the calibration code issue right?"
Them: "[Upper Management] doesn't want any software deviations on field machines. Is there any way to fix them without a code change?"

Me: [Thinks for a minute, assesses the seriousness of the request]

Me: "I guess you could shim the receiver mount with something, but I wouldn't recommend..."
Them: "Great! Get that fix out to the field stat! I want a tech write-up on my desk in an hour"



Anonymous
Unregistered
Send PM


Re: Neat try, but not really comparable. new [Re: italie]
#308003 - 04/29/13 04:12 PM


> Me: "I guess you could shim the receiver mount with something, but I wouldn't
> recommend..."
> Them: "Great! Get that fix out to the field stat! I want a tech write-up on my desk
> in an hour"

Rookie mistake: if it's not something you'd recommend, then you don't volunteer it. Mentioning high risk is also good; when you just say you don't recommend something, they hear "I'm too lazy to do that, it doesn't sound like fun".

Code freezes are only there to stop programmers sneaking changes in; management will always sanction a change if they need something enough. They aren't good at assessing risk at all.

Edited by smf (04/29/13 04:18 PM)



R. Belmont
Cuckoo for IGAvania
Reged: 09/21/03
Posts: 9716
Loc: ECV-197 The Orville
Send PM


Re: Neat try, but not really comparable. new [Re: ]
#308006 - 04/29/13 05:07 PM


> Code freezes are only there to stop programmers sneeking changes in, management will
> always sanction a change if they need something enough. They aren't good at assessing
> risk at all.

Depends. I had a manager who was a former hotshot SGI programmer and holder of several patents. You couldn't bullshit *anything* with her.



Anonymous
Unregistered
Send PM


Re: Neat try, but not really comparable. new [Re: R. Belmont]
#308009 - 04/29/13 06:23 PM


> Depends. I had a manager who was a former hotshot SGI programmer and holder of
> several patents. You couldn't bullshit *anything* with her.

I'm not suggesting telling lies, just that there is a disconnect between how most programmers speak and how most managers listen. While spin isn't a tool that programmers are often good at, it's possible not to drop yourself in it.

Offering up the possibility of doing something without a code change is putting your head on the block if that turns out to have a risk that you didn't think of. The manager will never take the blame.



Vas Crabb
BOFH
Reged: 12/13/05
Posts: 4462
Loc: Melbourne, Australia
Send PM


Re: Neat try, but not really comparable. new [Re: ]
#308011 - 04/29/13 07:26 PM


> Offering up the possibility of doing something without a code change is putting your
> head on the block if that turns out to have a risk that you didn't think of. The
> manager will never take the blame.

Well, as a manager of developers, let me tell you that for one I tend to prefer code changes to workarounds, as you're better off actually fixing shit than adding support overhead, and if a developer does something that I made the final decision on, I will take the heat for it.

As a real-life example, I got a developer to implement a feature that reduced the need for manual clean-up of stale cache data on certain configuration changes (reducing support overhead). I went through a basic plan of how to implement it, let her code and test it, then reviewed and signed off on the feature. Some time later, I overheard another developer criticising her over some implementation details. I interrupted and told him that if he had a problem with it, he should raise it with me. I assigned and prioritised the issue, and I signed off on the implementation. If there was a problem with it, it would have been my problem.

Same goes for managing up: if higher-ups aren't happy with burn-down, projected schedules aren't being met, etc., then for anything I've signed off on as a manager it's officially my problem, and I have to justify the discrepancy. The developers shouldn't be exposed to that shit; they should just be allowed to get their job done without worrying about management crap. That's what their immediate manager is for.

Now if a developer isn't meeting expectations, that is an issue that has to be addressed, but one has to consider whether the problem is poor management, unrealistic expectations, unforeseen circumstances, or actual poor performance on the part of the developer. Management is a shit job. You're almost always going to be pissing someone off, but ultimately the manager's job is to be the one to take a hit for the team, and take the fall when things don't go to plan. It's about responsibility, and if you don't have that you shouldn't be a manager.



Olivier Galibert
Semi-Lurker
Reged: 09/21/03
Posts: 398
Send PM


Re: Neat try, but not really comparable. new [Re: Bryan Ischo]
#308097 - 04/30/13 03:23 PM


> Also, no offense, but I find your Army story a little implausable. Surely anyone
> implementing such a system would do some trial runs before demoing the thing. And
> surely you wouldn't just happen to have every trial run use the same photos of tanks
> as the others, and surely the additional photos of tanks used in the trial runs
> wouldn't just also happen to have enemy tanks always photoed later in the day than
> friendly tanks.

Never heard the one about the head-recognition system that worked on photographs taken in one session? Funnily enough, its performance crashed when taking new photographs... it was recognizing the clothes.


> And even if you were ... careless ... enough to only use photos
> sourced from the same places for all training, dry run, and demoing purposes, surely
> you had some way to inspect the program to see why it was making the choices it made
> (i.e. highlight on the photo those pixels which contributed to the friend/foe
> decision), and would have seen right away that the thing was always focusing on the
> leaf color or shiny surfaces facing west or whatever.

If you're capable of saying what a trained neural network actually bases its decisions on, there are some machine learning papers you should write that would be appreciated by the community.

OG.



Anonymous
Unregistered
Send PM


Re: Neat try, but not really comparable. new [Re: Vas Crabb]
#308098 - 04/30/13 03:30 PM


> It's about responsibility, and if you don't have that you shouldn't be a manager.

It's about taking responsibility when things go bad as well as when they go good. I know of managers who hold meetings and the only thing they ever agree on is that it wasn't their fault.

The shit ultimately gets pushed onto the only people that can fix it, because they don't want anyone to know the project failed; and of course, if it all works out, the manager still gets all the glory. If it fails, then all they have lost is the potential glory, because they sure as hell aren't going to take any blame.

When you tell your manager that you disagree, and give a reason, and they still tell you to do it their way, and then it goes wrong and they blame you, and when you point out that you disagreed at the time but they wouldn't listen they come back with "we didn't believe you because you didn't do a good enough job of convincing us"... then you know your manager is trying to piss on you and tell you it's raining.

Rant over.

Edited by smf (04/30/13 03:34 PM)



Anonymous
Unregistered
Send PM


Re: Neat try, but not really comparable. new [Re: Bryan Ischo]
#308112 - 04/30/13 07:45 PM


> And surely you wouldn't just happen to have every trial run use the same photos of tanks
> as the others, and surely the additional photos of tanks used in the trial runs
> wouldn't just also happen to have enemy tanks always photoed later in the day than
> friendly tanks.

Yeah all you do is phone up the enemy and ask them for a variety of pictures of tanks in different places at different times of day. I'm sure they'd get right on it.

Arranging pictures of your own tanks is probably just as tricky.

