Are there any recommended best practices for scaling the Censys Python API calls?

I am interested in the CWMP protocol. The following query returns 10k of the 157k matching hosts. If I ran this search daily to track that service, I would quickly burn through my allocated quota.

from censys.search import CensysHosts

h = CensysHosts()
query = h.search("services.service_name: CWMP", per_page=100, pages=100)

Does the roadmap include bulk or streaming API options for large data sets?

Hey John,

Morgan from the product team here. We have a few options for you. Are you trying to regularly monitor all CWMP services on the internet? Or is this something where you need the data at a specific point in time?

Morgan


Hi Morgan,

Once a day, I check insecure protocols for specific ASNs to get a list of potentially impacted hosts. I verify the results against a database to see whether ports have opened or closed, hosts have been patched, etc.

This has helped me track risk and attack surface over time. When I started tracking SNMP, for example, my typical query matched around 80k endpoints.

I would appreciate any recommendations that make this more efficient.
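For reference, here is a minimal sketch of how I scope each run down today: I combine the service filter with an ASN allowlist in one query string, then diff the returned host set against the previous day's locally. The helper names and ASN values are hypothetical, and the fields assume Censys search syntax (services.service_name, autonomous_system.asn):

```python
def build_query(service: str, asns: list[int]) -> str:
    """Combine a service filter with an ASN allowlist into one Censys query string."""
    asn_clause = " or ".join(f"autonomous_system.asn: {a}" for a in asns)
    return f"services.service_name: {service} and ({asn_clause})"

def diff_hosts(previous: set[str], current: set[str]) -> tuple[set[str], set[str]]:
    """Return (newly seen hosts, hosts no longer seen) between two daily runs."""
    return current - previous, previous - current

# Hypothetical ASNs; the resulting string would be passed to h.search(query, ...)
query = build_query("CWMP", [64496, 64497])
```

Narrowing by ASN keeps each daily run well under the result count of the unscoped query, but it still spends quota re-fetching hosts that have not changed, which is why a bulk or streaming option would help.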

Thanks,

John
