GitHub API rate limiting

Hi there, I’m currently working on a web app where I’m using the GitHub API, but I later found out that GitHub implements rate limiting… The limit is technically 60 requests per hour, and I burn through these requests fairly quickly.

I have gone through some blogs but none seem to solve the problem. Is there a way I can increase the limit/requests?

Hey there,

The 60 requests per hour rate limit is for a non-authenticated IP address. If you’re making more requests than that, I recommend setting up a personal access token so you can make authenticated calls. Doing so will give you a much larger rate limit of 5000 requests per hour.
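If you want to see where you stand, I believe you can query the rate_limit endpoint to check how much of your quota is left (as far as I know, that check doesn’t count against the limit). A minimal sketch:

// Sketch: check the current rate limit status for whoever is making the call
fetch('https://api.github.com/rate_limit')
  .then(response => response.json())
  .then(data => {
    // data.rate.limit is the total allowance, data.rate.remaining is what's left
    console.log(data.rate.limit, data.rate.remaining);
  })
  .catch(error => console.log(error));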

Additionally, depending on the type of requests you’re making, you could set up a caching module to prevent making multiple calls for the same data.

Do note that if you set up a token, you’ll want to ensure it is kept secret - if your app is entirely front end, this is likely not possible.

Hi, can you help me with microservices projects? I’m not able to figure them out.

@nhcarrigan thank you… I just set up the personal access token, and since I’m using fetch to make the request, is this demo code valid?

fetch(url, {
  method: 'GET',
  headers: {
    'Authorization': 'token MY_ACCESS_TOKEN',
  },
}).then(response => {
  return response.json()
}).then(data => {
  console.log(data)
}).catch(error => {
  console.log(error)
})

The web app is a simple search project that gets a GitHub user’s repositories, number of followers, and number of users they’re following.

This will require me to make multiple requests to the GitHub API, and I think chaining the fetch requests would be ideal (I may be wrong)… I’m still worried that these multiple fetches will burn through the limit quickly.

Do you think caching will help tackle this, and how do I go about the caching? A blog post or YouTube link that can help me get started would be appreciated.

This project is entirely FE

If the project is front end, you should not use your token - I would be able to see your token in the code if I visited your website, and that would mean I could interact with GitHub as you.

That being said, what you’ve described could be done in two API calls per user.

The first being to api.github.com/users/:username, which returns an object that has the follower counts you need:

{
  "login": "nhcarrigan",
  "id": 63889819,
  "node_id": "MDQ6VXNlcjYzODg5ODE5",
  "avatar_url": "https://avatars.githubusercontent.com/u/63889819?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/nhcarrigan",
  "html_url": "https://github.com/nhcarrigan",
  "followers_url": "https://api.github.com/users/nhcarrigan/followers",
  "following_url": "https://api.github.com/users/nhcarrigan/following{/other_user}",
  "gists_url": "https://api.github.com/users/nhcarrigan/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/nhcarrigan/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/nhcarrigan/subscriptions",
  "organizations_url": "https://api.github.com/users/nhcarrigan/orgs",
  "repos_url": "https://api.github.com/users/nhcarrigan/repos",
  "events_url": "https://api.github.com/users/nhcarrigan/events{/privacy}",
  "received_events_url": "https://api.github.com/users/nhcarrigan/received_events",
  "type": "User",
  "site_admin": false,
  "name": "Nicholas Carrigan (he/him)",
  "company": "@freeCodeCamp",
  "blog": "www.nhcarrigan.com",
  "location": "Washington, USA",
  "email": null,
  "hireable": null,
  "bio": "Open Source Scrivener and Bug-Hunter Errant @freeCodeCamp",
  "twitter_username": "nhcarrigan",
  "public_repos": 53,
  "public_gists": 2,
  "followers": 413,
  "following": 125,
  "created_at": "2020-04-18T02:23:22Z",
  "updated_at": "2021-11-01T17:08:31Z"
}

The second would be to api.github.com/users/:username/repos, to get a list of repository objects. If you only need the number of repositories, that’s included on the first call.
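In code, that could look roughly like this (the function name is just for illustration; the two endpoints are the ones described above):

// Rough sketch: fetch the profile and the repo list for one user in parallel
function getProfileAndRepos(username) {
  return Promise.all([
    fetch(`https://api.github.com/users/${username}`).then(res => res.json()),
    fetch(`https://api.github.com/users/${username}/repos`).then(res => res.json()),
  ]).then(([profile, repos]) => ({ profile, repos }));
}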

Then, as for caching, I’d use a Map that you can pass around.

const cache = new Map();

// pretend logic here for getting a user
function getUser(user) {
  // serve from the cache when we already have this user
  const cachedData = cache.get(user);
  if (cachedData) return Promise.resolve(cachedData);
  return fetch(`https://api.github.com/users/${user}`) // all of your request logic here
    .then(response => response.json())
    .then(data => {
      cache.set(user, data);
      return data;
    });
}
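Then wherever you need that data, you’d call it like so (rough usage example):

getUser('nhcarrigan').then(data => {
  // a second call for the same user would come from the cache instead of the API
  console.log(data.followers, data.following);
});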

@nhcarrigan, I used React for this project and I have already rendered the numbers; you can check it out here:

devpadi.netlify.app

But I want to add more features, e.g. when you click on the repositories card, I want to display all the repositories that user has created (the GitHub API only returns 30 repos by default).
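I think I might be able to use the per_page and page query parameters to get more than 30 at a time, maybe something like this (not sure if this is the right approach):

// guessing at this: ask for up to 100 repos on the first page
const username = 'Lukas';
fetch(`https://api.github.com/users/${username}/repos?per_page=100&page=1`)
  .then(response => response.json())
  .then(repos => console.log(repos.length))
  .catch(error => console.log(error));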

I tried fetching the URLs separately.

The first:

https://api.github.com/users/Lukas

The second:

https://api.github.com/users/Lukas/repos

but I got this error message: net::ERR_INSUFFICIENT_RESOURCES

I thought this was caused by the multiple requests I was making at the same time, so I decided to chain the promises so one promise doesn’t run until the last one is completed.
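Something along these lines is what I mean by chaining (the function name is just a placeholder):

// wait for the profile request to finish before starting the repos request
async function getUserThenRepos(username) {
  const profile = await fetch(`https://api.github.com/users/${username}`).then(res => res.json());
  const repos = await fetch(profile.repos_url).then(res => res.json());
  return { profile, repos };
}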

This may sound dumb, but what will the function parameter user be in this case… I don’t really get it.


That’s just an example. Since GitHub usernames are all unique, it can just be the user’s name. So Lukas will be the “unique key” you can use as a reference within a hash map (or Map) (the cache object shown above).

It’s also worth noting that GitHub usernames are case insensitive, so lukas, Lukas, LUKAS, and even LuKaS are all the same username. Your “key” can be any of them, but they should all use the same case so you don’t get confused.
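In practice you could just normalize the key before every read or write, something like this (cacheKey is just an illustrative helper):

const cache = new Map();

// always lowercase the key so LuKaS, Lukas, and lukas hit the same entry
function cacheKey(username) {
  return username.toLowerCase();
}

cache.set(cacheKey('LuKaS'), { followers: 413 });
console.log(cache.get(cacheKey('lukas'))); // { followers: 413 }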

If your project is only front-end you really only have 3 choices for caching:

  1. Cache your requests in a variable; this will cut down on subsequent requests. You could use a “hash map”/Map on the client side so that once you get data for a user (like lukas) you never fetch their data again unless you refresh the page, at which point your app restarts and needs to make the request again.

  2. Cache your requests in localStorage. This allows you to store responses for a given amount of time within the browser while cutting down on the overall number of requests. This is probably the most complex option, as you need to read data from localStorage or alternative browser storage options (like IndexedDB). This route is also probably the most flexible, as you have complete control over how long you want to keep the cache, and for what. (There’s a rough sketch of this approach after the list.)

  3. Get the data at build time. This is actually an entire pattern used for JAMStack apps and in frameworks like Gatsby. Essentially your app makes its HTTP requests at build time (allowing you to use a token saved locally/in your build environment securely) and then saves the returned data into a static file that is shipped and loaded by your app directly.
    This offers less flexibility than the other two options, but it uses the fewest API calls and creates the fastest app possible. It’s great for scenarios where the data you want to load doesn’t change that often.
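For option 2, here is a very rough sketch of what that could look like, assuming you store a timestamp alongside the data (the helper names and the one-hour expiry are just placeholders):

const ONE_HOUR = 60 * 60 * 1000;

// read a cached user back out, treating anything older than an hour as stale
function getCachedUser(username) {
  const raw = localStorage.getItem(username.toLowerCase());
  if (!raw) return null;
  const { savedAt, data } = JSON.parse(raw);
  if (Date.now() - savedAt > ONE_HOUR) return null;
  return data;
}

// setItem overwrites any previous entry stored under the same key
function cacheUser(username, data) {
  localStorage.setItem(
    username.toLowerCase(),
    JSON.stringify({ savedAt: Date.now(), data })
  );
}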

It’s hard to decide if any of these options are a good idea without knowing the full requirements of what you want to do. There’s also the possibility you’d be better off having a back-end of some kind to make the API requests for you.

I personally use option 3 with a static site generator for my sites that leverage my GitHub profile APIs. I don’t create new repos that often, so having some “stale” data isn’t a big deal.

Hi, can you help me with the API microservices projects?

@bradtaniguchi, thanks for your input… I decided to go the localStorage route for the caching, but as an amateur I’m facing some small challenges.

How do I update the old value stored in localStorage to a new one when a user is searched? Kindly have a look at this CodePen:

https://codepen.io/Que0/pen/xxLpQzO