Blog

Generating Python Code with Transformers

2021-03-10

In Karpathy's famous essay The Unreasonable Effectiveness of Recurrent Neural Networks, his LSTM-based network generates C code after training on the Linux source. This demonstration was sufficiently impressive that every now and then I'll revisit how well current language models can generate code.

For this I scraped Python code from GitHub and trained a variety of transformer-based networks on it. The output looks pretty good:

def find_match(pathname):
    # -- Newline characters help
    try:
        newline_line = dirname.encode('utf-8'))
    except UnicodeError as ex:
        raise AssertionError('Invalid option "%s" for separator. '
                                  'pathname must be an ASCII string')

    # This is the first time this checks a Windows prompt.
    # On Windows, if you don't have any nested types in start_when, the
    # *option* is not stored.
    newline_line = str(str(str(ex))[1])[0]
    if newline_line is None:
        return None
    self._add_option(newline_line)

def _from_option_vars(self):
    # -- Default command for all command line arguments
    return self.line_argument_group.args

def _get_number(self):
    # -- Mark as variable name for how to call the argument.
    return self.int_argument_group[::1]

def _get_control_title(self):
    # -- Controls a char string for horizontal content
    for command in self.command_list:
        if command in getchar_string(command):
            continue
        if getchar_string(command):
            command.title = command
    return command
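
The generation step itself is short; with the Hugging Face transformers library, sampling from a fine-tuned causal language model looks roughly like this (a minimal sketch, with GPT-2 standing in for whichever checkpoint you've trained):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a causal LM checkpoint; swap in your own fine-tuned model directory.
model_name = "gpt2"  # placeholder, not the exact model used for the sample above
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# Prompt with the start of a function and sample a continuation.
prompt = "def find_match(pathname):\n"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    output_ids = model.generate(
        input_ids,
        max_length=200,
        do_sample=True,
        temperature=0.8,
        top_p=0.95,
        pad_token_id=tokenizer.eos_token_id,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))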


Danbooru Utility

2021-03-09

I made Danbooru Utility to make working with gwern's Danbooru20XX dataset easy.

It can explore the dataset, filter by tags, rating, and score, detect faces, and resize the images. I've been using it to make datasets for GAN training.
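
Most of the filtering boils down to walking the metadata and keeping the records that match. A stripped-down sketch of that idea (not the tool's actual code; the JSON field names here are assumptions about the metadata format):

import json

def filter_metadata(path, required_tags, min_score=10, ratings=("s",)):
    """Yield metadata records matching tag, score, and rating filters.

    Assumes a JSON-lines file where each record has 'tags' (a list of
    {'name': ...} dicts), 'score', and 'rating' fields -- adjust to the
    actual Danbooru20XX schema as needed.
    """
    required = set(required_tags)
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            tags = {t["name"] for t in record.get("tags", [])}
            if not required.issubset(tags):
                continue
            if int(record.get("score", 0)) < min_score:
                continue
            if record.get("rating") not in ratings:
                continue
            yield record

for r in filter_metadata("metadata.json", ["1girl", "solo"]):
    print(r["id"])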

Problem Solving with Deep Q-Learning at IEEE RoboResearch

2016-06-01 10:00

Download the slides here, or with:

git clone https://github.com/reidsanders/dl-talk.git

Follow along on the slides as the video runs:

Deep Learning Talk at the Triangle IEEE Robotics and Automation group

2016-02-05

Download the slides with:

git clone https://github.com/reidsanders/dl-talk.git

Follow along on the slides as the video runs:

I glossed over a lot by necessity. If you are interested, I hope you will try it yourself by following a tutorial or running an existing project.

If it were me, I'd start with:

Implementing a Neural Network from Scratch (there's a minimal sketch of the idea below, after these recommendations).

If that appealed, I'd continue on with the tutorials on that blog. If I wanted something with more breadth, I'd go to:

Deep Learning Tutorials.

If I wanted to learn machine learning from the ground up, I'd go here:

Stanford Machine Learning Course

UFLDL Tutorial: a fairly math-heavy Matlab/Octave tutorial.

For Deep Learning in depth, I'd go here:

Neural Networks and Deep Learning: a more in-depth textbook.

A great community:

Machine Learning Subreddit

A good source of new datasets:

Datasets Subreddit
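
To give a flavor of what "from scratch" means there, here's a minimal sketch of a tiny two-layer network trained on XOR with nothing but numpy (just the general shape of the idea, not code from any of the tutorials above):

import numpy as np

# Toy data: learn XOR with a tiny two-layer network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass (gradients of squared error through the sigmoids)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent update
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]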

Why has someone been paying 100x market price for GPU instances?

2015-07-24

For the past 10 weeks, EC2 GPU spot instances in US East have been going for 15-100x the price in other regions.

I've been using Amazon's GPU instances for running deep neural networks, and have been quite impressed by the ease and cost. Spot instances, which are terminated if the spot price rises above your maximum bid, often go for 5-10x less than on-demand instances. Recent advances in deep learning have relied upon large neural networks running on high-end GPUs, but getting your own hardware is expensive, and a big lock-in. GPU instances are affordable and scalable, and spot prices are reliable enough to minimize the risk of having a training run cut short.

One mystery struck me when I was looking for the best region to run these instances in: who is spending 100x the going spot price on g2.2xlarge instances in US East?

g2.2xlarge spot price in US East

Beginning on May 7, and ending on May 29, the spot price for g2.2xlarge instances in us-east-1e was $6.00.

When I restricted my bid to other availability zones:

aws ec2 request-spot-instances --spot-price 0.50 --launch-specification "{\"KeyName\": \"my-key\", \"SecurityGroups\": [\"myip\"],\"ImageId\": \"ami-0f53a04b\",\"InstanceType\": \"g2.2xlarge\", \"Placement\":{\"AvailabilityZone\":\"us-east-1a\"}}"

I found they had no instances at all.

Compare this to the going price of $0.065 in us-west-1 and other regions. On May 29, the price dropped to $2.60, still forty times the rate in us-west-1. On July 23, the price briefly spiked back up to $5.00, then dropped to the price of on-demand instances, $0.65.
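
The prices above are from the spot price history charts, but the same history can be pulled programmatically. A rough boto3 sketch, with dates filled in to roughly match the period discussed:

import boto3
from datetime import datetime

ec2 = boto3.client("ec2", region_name="us-east-1")

# Pull g2.2xlarge Linux spot price history for the period in question.
resp = ec2.describe_spot_price_history(
    InstanceTypes=["g2.2xlarge"],
    ProductDescriptions=["Linux/UNIX"],
    AvailabilityZone="us-east-1e",
    StartTime=datetime(2015, 5, 1),
    EndTime=datetime(2015, 7, 24),
)

for record in resp["SpotPriceHistory"]:
    print(record["Timestamp"], record["AvailabilityZone"], record["SpotPrice"])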

Why would anyone pay ten times the on-demand rate, and one hundred times the spot bid price? On-demand instances would be much more reliable, as Amazon can drop your spot instance whenever it wants. Moreover, this customer is taking the entire available supply for US East, Amazon's largest region. Such a huge customer would also be able to make a long-term deal for a lower rate.

Looking at these charts, you might notice occasional spikes to $5.00 or $6.00. As this forum thread indicates, these price spikes have been going on for a while. Apparently this is caused by companies that don't want to lose their instance and are willing to occasionally pay significantly more than the market rate in exchange for a cheaper long-term solution. Sometimes the spot supply shrinks, and all that is left are a few high bidders. In many cases they would have saved money, but a spike lasting ten weeks makes this strategy financially inefficient (a rough cost comparison follows the list below). So why did this happen? Here are a few theories.

  • Some company or government set their maximum bid far higher than they expected the price to ever reach (>=$6.00) to avoid being outbid and losing their instance.

    • Then someone else did the same thing (=$6.00).

    • They accidentally bid against themselves.

  • They simply didn't notice that the price spike never went down, and have been paying enormous prices for months.

  • Spot instance availability dropped so much for this entire period that only a few instances were actually being paid for. Still unnecessarily expensive, but if they were running something that couldn't be interrupted, it might make sense. I still recommend not running critical, month-long operations on spot instances.

  • Someone thought they were bidding in cents, not dollars.

  • Actually, they are so price-insensitive they don't care about expanding to other regions, using on-demand, reserved, or g2.8xlarge instances, or buying their own hardware.

  • It's something Amazon is doing.

    • Maybe Amazon sources on-demand instances from the spot pool by simply bidding very high. This would be an unusual hack, but it's not unthinkable.

    • Maybe Amazon doesn't actually have any g2.2xlarge instances in the US East spot pool. They did have them in the on-demand pool.

    • It's a bug.
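
To put the ten-week spike in perspective, here's the rough cost comparison behind calling it financially inefficient (assuming one instance running continuously at the prices above):

# Rough cost of one g2.2xlarge running continuously for the 10-week spike.
hours = 10 * 7 * 24                 # ~1680 hours

spot_spike = 6.00 * hours           # paying the spiked us-east-1e spot price
on_demand = 0.65 * hours            # US East on-demand rate
west_spot = 0.065 * hours           # typical us-west-1 spot rate

print(f"spiked spot: ${spot_spike:,.0f}")   # ~$10,080
print(f"on demand:   ${on_demand:,.0f}")    # ~$1,092
print(f"west spot:   ${west_spot:,.0f}")    # ~$109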

If you have any insight, let me know. I'm asking Amazon on Twitter, and I'll update if I learn something new.

Parameterized Would You Rather and Photo Mosaic Creator

2013-04-16

I'll be gradually posting my existing projects on this page, as well as writings and links.

Ever seen the photo mosaics that form a large photo out of lots of smaller ones? This lets you do the same thing with your own photos. Try it out yourself.
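
The core idea behind a photo mosaic is simple: for each cell of the target image, pick the tile whose average color is closest. A stripped-down sketch of that idea using Pillow (not the actual project code):

from PIL import Image
import numpy as np

def average_color(img):
    return np.asarray(img.convert("RGB"), dtype=float).reshape(-1, 3).mean(axis=0)

def build_mosaic(target_path, tile_paths, cell=16):
    target = Image.open(target_path).convert("RGB")
    w, h = target.size
    tiles = [Image.open(p).convert("RGB").resize((cell, cell)) for p in tile_paths]
    tile_colors = np.array([average_color(t) for t in tiles])

    mosaic = Image.new("RGB", (w // cell * cell, h // cell * cell))
    for y in range(0, mosaic.height, cell):
        for x in range(0, mosaic.width, cell):
            block = target.crop((x, y, x + cell, y + cell))
            # Pick the tile whose average color is nearest this block's average.
            idx = np.argmin(((tile_colors - average_color(block)) ** 2).sum(axis=1))
            mosaic.paste(tiles[idx], (x, y))
    return mosaic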

You can also have some fun with parameterized would you rather. It takes the standard would-you-rather questions and balances the parameters to make the options equally appealing. Specifically, it tries to maintain an even split between the choices. You can also submit your own questions!
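
The balancing is essentially a feedback loop: if too many people pick one option, nudge its parameter to make it less attractive. A toy sketch of that idea (not the site's actual logic; the dollar-amount parameter is just an example):

def rebalance(amount_a, votes_a, votes_b, step=0.05):
    """Nudge option A's dollar amount so the vote split drifts toward 50/50.

    Toy version of the idea: if A is winning, make it slightly less appealing
    (lower its payoff); if A is losing, make it slightly more appealing.
    """
    total = votes_a + votes_b
    if total == 0:
        return amount_a
    share_a = votes_a / total
    # Proportional adjustment: the further from an even split, the bigger the nudge.
    return amount_a * (1 - step * (share_a - 0.5) * 2)

# e.g. "Would you rather have $100 now or $X in a year?" with A currently at $100
amount = 100.0
amount = rebalance(amount, votes_a=70, votes_b=30)  # A too popular -> shrinks to $98
print(round(amount, 2))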