Mining GitHub Repository Information using the Official REST API

2 minute read


GitHub provides a (not very convenient or well-documented) HTTP API for requesting information from GitHub. We can use it to request repository information in JSON format. You can apply various search conditions and sort the results if necessary. For example, if you want to collect the 1000 most-starred repositories whose language is Java, you can use the following request.
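As a sketch of what such a request looks like, the search URL can be assembled from GitHub's repository search endpoint plus the `q`, `sort`, and `order` parameters (the qualifier values here are illustrative):

```python
from urllib.parse import urlencode

# GitHub's repository search endpoint (REST API v3).
SEARCH_URL = 'https://api.github.com/search/repositories'

# q carries the search qualifiers; sort and order control the ranking.
params = {'q': 'language:java', 'sort': 'stars', 'order': 'desc'}
request_url = SEARCH_URL + '?' + urlencode(params)
print(request_url)
```

Requesting this URL returns a JSON object whose `items` array holds the matched repositories.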

See the following links for the complete documentation.


However, there are several restrictions (restriction 1 is not documented):

  1. Only one page of results (30 by default) is returned for each request.
  2. You are limited to 10 requests per minute (30 per minute if authenticated).
  3. You can only get up to 1000 search results for a given set of conditions.
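Restriction 2 implies a minimum delay between consecutive requests; a quick calculation, assuming the unauthenticated limit:

```python
# 10 unauthenticated requests per minute -> at least 6 seconds apart.
REQUESTS_PER_MINUTE = 10
min_delay = 60 / REQUESTS_PER_MINUTE
print(min_delay)  # 6.0 seconds; sleeping 7 s leaves a safety margin
```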

Therefore, you cannot get more than 1000 results for a given search request, which limits the scale of possible analyses. You also cannot send more than 10 requests per minute. Finally, you have to fetch results page by page using the page parameter.

Note that the maximum page number is 34 due to the 1000-result restriction.
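Since the results must be fetched page by page, the full set of page URLs can be generated up front; a minimal sketch, with placeholder query values:

```python
from urllib.parse import urlencode

SEARCH_URL = 'https://api.github.com/search/repositories'
query = {'q': 'language:java', 'sort': 'stars', 'order': 'desc'}

# Pages 1..34 cover the 1000-result cap (34 * 30 >= 1000).
page_urls = [SEARCH_URL + '?' + urlencode({**query, 'page': p})
             for p in range(1, 35)]
print(len(page_urls))  # 34
print(page_urls[0])
```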

If your request contains an error, the error message will be in the message field of the returned JSON object. Otherwise, the array of repository information will be in the items field.
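That check can be wrapped in a small helper (the function name is my own, and the stubbed responses below only mimic the documented shape of GitHub's answers):

```python
def extract_items(payload):
    """Return the 'items' array from a search response, or raise on error."""
    if 'items' in payload:
        return payload['items']
    # Failed requests carry a 'message' field instead of 'items'.
    raise RuntimeError('GitHub API error: ' + payload.get('message', 'unknown error'))

# Stubbed successful and failed responses:
ok = {'total_count': 1, 'items': [{'full_name': 'torvalds/linux'}]}
bad = {'message': 'API rate limit exceeded'}
print(extract_items(ok)[0]['full_name'])
```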

Example Implementation in Python

```python
import math
import time

import requests

API_URL = 'https://api.github.com/search/repositories'


def get_repolist_by_stars(num=30, lang=''):
    """Return a list of GitHub repo objects from the GitHub REST v3 API.

    Example search URL:
    This URL collects GitHub Java projects sorted by stars in descending order.

    Remember that:
    1. The results are returned in pages, so you have to fetch them page by page.
    2. You are limited to 10 requests per minute.
    3. You can only get up to 1000 search results.

    Reference documentation:
    """
    params = {'q': 'stars:>1000', 'sort': 'stars', 'order': 'desc', 'page': '1'}
    repolist = []

    if lang != '':
        # Append the language qualifier; do not overwrite the star filter.
        params['q'] += ' language:' + lang

    print('Sending HTTP requests to GitHub; this may take several minutes...')

    for i in range(1, math.ceil(num / 30) + 1):
        params['page'] = str(i)
        json = requests.get(API_URL, params=params).json()
        if 'items' not in json:
            print('Error: no result in page ' + str(i) + '!')
            print('Message from GitHub: ' + str(json.get('message')))
            break
        repolist.extend(json['items'])
        print('Downloaded repository information in page ' + str(i))
        time.sleep(7)  # Stay under the 10-requests-per-minute limit

    return repolist[0:num]
```