How to Extract Product Details from Costco with Python

Data scraping has become a vital tool for individuals and businesses in today's data-driven world, where the ability to collect and analyze data is key to success. It allows you to extract essential data from websites, offering meaningful insights, well-informed decision-making, and a competitive edge.

In this blog, we'll learn how to extract product details from Costco with Python. Our emphasis will be on the "Electronics" category, with a particular focus on the "Audio/Video" subcategory. We aim to scrape key features like the product's name, color, brand, connection type, item ID, price, model, categories, and description for every electronic device.

From this product category, the following features are scraped:

  • Product Name
  • Product URL
  • Brand
  • Category
  • Color
  • Connection Type
  • Description
  • Item Id
  • Model
  • Price

Let’s Start with Costco Product Data Scraping

Before we go through the code, we'll have to install some dependencies and libraries. We'll use Python for scraping along with two well-known libraries: Selenium and Beautiful Soup. BeautifulSoup helps us parse HTML and XML documents, whereas Selenium automates web browsers for scraping and testing purposes.

Once the libraries are installed, we'll review the website's structure to identify the elements we need to scrape. This involves studying the site's HTML code and recognizing the particular tags and attributes that contain the information we're interested in.

With that knowledge in hand, we'll start writing the Python code to scrape the website.

We'll use Beautiful Soup to parse and extract the data and Selenium to automate the browser actions required to reach it. Once the script is ready, we'll run it and save the data to a CSV file for easy analysis.

Installation of Essential Packages:


Pandas is a library for data manipulation and analysis; it lets you store and work with the data extracted from a website. We use pandas to convert the data from a dictionary into a DataFrame, a format better suited for analysis and manipulation, and to save the DataFrame as a CSV file so it is easy to open and use in other software.

Another library is lxml, used for processing HTML and XML documents; it parses the XML or HTML content of a webpage. Here, we use lxml's etree module, imported as 'et', to search and navigate the tree-like structure of the HTML document. ('et' stands for ElementTree, a module in the lxml library that offers an easy and effective way of working with HTML and XML documents.)

BeautifulSoup is a library that makes extracting data from web pages easier. It helps you parse a webpage's XML or HTML content and pull out the data you're interested in. Here, the BeautifulSoup library is used to parse the HTML content obtained from the webpage.

Selenium is the library that helps you automate web browsers. You can use it to automate navigating and interacting with a webpage, like filling out forms or clicking buttons.

WebDriver is the component Selenium uses to interact with web browsers; it lets you control a browser and execute JavaScript commands. The Selenium library's webdriver module is used to automate interaction with the webpage by creating an instance of a web driver and navigating to a particular URL; this gives us the page's source code, which can then be parsed and analyzed.
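All four packages described above can be installed from the command line with pip (this assumes Python 3 and pip are already set up on your machine):

```shell
pip install pandas lxml beautifulsoup4 selenium
```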


Creating an instance of the web driver is among the most vital things you'll have to do while using Selenium. A web driver is a class that interacts with a web browser such as Firefox, Chrome, or Edge. With webdriver.Chrome(), we create an instance of the Chrome web driver, which lets us control a Chrome browser and interact with webpages like any user would.

Using the power of web drivers and Selenium, you can unlock the full potential of data scraping and automate data collection like a professional! With a web driver, we can navigate across pages, interact with page elements, fill in forms, click buttons, and scrape the required information, automating tasks and collecting data far more efficiently.
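As a minimal sketch, creating the Chrome driver instance looks like this (it assumes a local Chrome installation; Selenium 4+ downloads the matching driver binary automatically):

```python
from selenium import webdriver

# Create a Chrome WebDriver instance; this opens a browser window we can control.
driver = webdriver.Chrome()
```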

Understand Data Scraping Functions

Now that we have a basic understanding of web scraping and the tools involved, let's take a close look at the different functions we've created for the scraping procedure. Creating functions improves reusability, code organization, and maintainability, and makes the codebase easier to understand, update, and debug.

We'll clarify the purpose of every function described and how it contributes to the overall procedure.

Functions to Scrape Content:


We define a function named extract_content that takes one argument, url. It uses Selenium to navigate to that URL, retrieves the page source, and parses it into a BeautifulSoup object with the lxml parser; the result is then passed to et.HTML() and converted into an ElementTree object. The returned dom object can be used to navigate and search the HTML document's tree-like structure and extract information from the page.
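A sketch of extract_content following the description above (it relies on the driver instance created earlier, so it is not runnable without a live browser session):

```python
from bs4 import BeautifulSoup
from lxml import etree as et

def extract_content(url):
    # Load the page in the automated browser and grab the rendered source.
    driver.get(url)
    soup = BeautifulSoup(driver.page_source, 'lxml')
    # Convert the parsed HTML into an ElementTree object so it can be queried with XPath.
    dom = et.HTML(str(soup))
    return dom
```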

Functions to Click on the URL:


This function uses the find_element() method with By.XPATH to locate the "Audio/Video" category link on the Costco electronics page and the click() method to navigate to it. It lets us follow a particular link on the website by clicking it so we can then scrape that page's content.
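A sketch of click_url; the XPath locating the Audio/Video link is an assumption, since the exact expression isn't shown here, and the function depends on the driver created earlier:

```python
from selenium.webdriver.common.by import By

def click_url():
    # Find the "Audio/Video" category link by its visible text (assumed XPath) and click it.
    driver.find_element(By.XPATH, '//a[contains(text(), "Audio/Video")]').click()
```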


Function for Scraping Category Links:

Function-for-Scraping-Category-Links

The xpath() method of the dom object is used to get all elements matching the given XPath expression. Here, the XPath selects the 'href' attributes of all 'a' elements that are descendants of elements with the class "categoryclist_v2". After navigating to the Audio/Video category, the function scrapes the links of the four displayed subcategories, allowing further scraping on those specific pages.
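Based on the selector described above, category_links can be sketched as follows (the class name "categoryclist_v2" is taken from this post; verify it against the live page):

```python
from lxml import etree as et

def category_links(dom):
    # The href of every <a> nested under an element whose class is "categoryclist_v2".
    return dom.xpath('//*[@class="categoryclist_v2"]//a/@href')
```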


Function to Extract Product Links:

Having obtained the four subcategory links, we'll now extract all product links under these categories.


This function uses the previously defined category_links() and extract_content() functions to visit each subcategory page and scrape the links of all products available under it. It uses the xpath() method of the content object to select all product links via an XPath expression that picks the 'href' attributes of 'a' elements that are descendants of elements with the automation-id "productList" and whose 'href' ends with ".html".
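A sketch combining the pieces (XPath 1.0 has no ends-with(), so the ".html" suffix check is emulated with substring(); the function depends on the browser-backed helpers above and is therefore not runnable offline):

```python
def product_links(dom):
    # Visit each subcategory page and collect product URLs that end in ".html"
    # from under the element whose automation-id is "productList".
    links = []
    for category_url in category_links(dom):
        content = extract_content(category_url)
        links += content.xpath(
            '//*[@automation-id="productList"]'
            '//a[substring(@href, string-length(@href) - 4) = ".html"]/@href'
        )
    return links
```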

Function for Scraping Product Name:

With the product links obtained, we'll extract the required features of every product. Each function uses a try-except block to handle any errors that might occur while scraping the features.


Inside the try block, the function uses the dom object's xpath() method to select the text of the element with the class "product-title". If the product name is not available, the function assigns the value "Product name is not available" to the 'product_name' column of the dataframe 'data' at the current product's index.
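A sketch of the product-name scraper; the class name "product-title" comes from this post, while the function name and the exact element tag are illustrative:

```python
import pandas as pd
from lxml import etree as et

def scrape_product_name(dom, data, i):
    try:
        # Text of the element carrying the "product-title" class.
        data.loc[i, 'product_name'] = dom.xpath('//*[@class="product-title"]/text()')[0]
    except Exception:
        data.loc[i, 'product_name'] = 'Product name is not available'
```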

Function for Scraping the Product Brand:


The function uses the dom object's xpath() method to select the text of the element with the itemprop "brand". If the brand name is not available, the function assigns the value "Brand is not available" to the 'brand' column.
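A sketch under the same assumptions as before (itemprop "brand" is from this post; the function name is illustrative):

```python
import pandas as pd
from lxml import etree as et

def scrape_brand(dom, data, i):
    try:
        data.loc[i, 'brand'] = dom.xpath('//*[@itemprop="brand"]/text()')[0]
    except Exception:
        data.loc[i, 'brand'] = 'Brand is not available'
```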

Function for Extracting the Product Price:


The function uses the dom object's xpath() method to select the product's price. If the price is not available, the function assigns the value "Price is not available" to the 'price' column.
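The exact price selector isn't shown here, so the automation-id in the sketch below ("productPriceOutput") is purely a placeholder assumption; the structure of the function mirrors the other scrapers:

```python
import pandas as pd
from lxml import etree as et

def scrape_price(dom, data, i):
    try:
        # NOTE: placeholder selector; replace with the real price XPath from the live page.
        data.loc[i, 'price'] = dom.xpath('//*[@automation-id="productPriceOutput"]/text()')[0]
    except Exception:
        data.loc[i, 'price'] = 'Price is not available'
```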

Function for Extracting Item Id of a Product:


This function uses the dom object's xpath() method to select the text of the element with the id "item-no". If the product id is not available, the function assigns the value "Item Id is not available" to the 'item_id' column.
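A sketch using the id "item-no" described above (function name illustrative):

```python
import pandas as pd
from lxml import etree as et

def scrape_item_id(dom, data, i):
    try:
        data.loc[i, 'item_id'] = dom.xpath('//*[@id="item-no"]/text()')[0]
    except Exception:
        data.loc[i, 'item_id'] = 'Item Id is not available'
```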

Function for Extracting Product Description:


The function uses the dom object's xpath() method to select the text of the element with the automation-id "productDetailsOutput". If the product description is not available, the function assigns the value "Description is not available" to the 'description' column.
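A sketch using the automation-id described above (on a real page the description may span several text nodes, so joining them could be needed; function name illustrative):

```python
import pandas as pd
from lxml import etree as et

def scrape_description(dom, data, i):
    try:
        data.loc[i, 'description'] = dom.xpath('//*[@automation-id="productDetailsOutput"]/text()')[0]
    except Exception:
        data.loc[i, 'description'] = 'Description is not available'
```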

Function for Extracting a Product Model:

This function uses the dom object's xpath() method to select the text of the element with the id "model-no". If the product model is not available, the function assigns the value "Model is not available" to the 'model' column.

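A sketch using the id "model-no" described above (function name illustrative):

```python
import pandas as pd
from lxml import etree as et

def scrape_model(dom, data, i):
    try:
        data.loc[i, 'model'] = dom.xpath('//*[@id="model-no"]/text()')[0]
    except Exception:
        data.loc[i, 'model'] = 'Model is not available'
```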

Function for Extracting Product’s Connection Type


The function uses the dom object's xpath() method to select the text of the first div element that is the following sibling of the element with the text "Connection Type". If the connection type is not available, the function assigns the value "Connection type is not available" to the 'connection_type' column.
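A sketch of the sibling-based lookup described above (the exact tag names are assumptions; function name illustrative):

```python
import pandas as pd
from lxml import etree as et

def scrape_connection_type(dom, data, i):
    try:
        # Text of the first <div> immediately following the "Connection Type" label.
        data.loc[i, 'connection_type'] = dom.xpath(
            '//div[text()="Connection Type"]/following-sibling::div[1]/text()')[0]
    except Exception:
        data.loc[i, 'connection_type'] = 'Connection type is not available'
```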

Function for Extracting Product’s Category Type


The function uses the dom object's xpath() method to select the text of the 10th element with the itemprop "name". If the product category is not available, the function assigns the value "Category is not available" to the 'category' column.
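A sketch of the positional lookup described above (picking the 10th itemprop="name" element is fragile and worth re-checking against the live breadcrumb markup; function name illustrative):

```python
import pandas as pd
from lxml import etree as et

def scrape_category(dom, data, i):
    try:
        data.loc[i, 'category'] = dom.xpath('(//*[@itemprop="name"])[10]/text()')[0]
    except Exception:
        data.loc[i, 'category'] = 'Category is not available'
```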

Function for Extracting Product’s Color:


This function uses the dom object's xpath() method to select the text of the first div element that is the following sibling of the element containing the text "Color". If the product color isn't available, the function assigns the value "Color is not available" to the 'color' column.
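A sketch mirroring the connection-type lookup (tag names assumed; function name illustrative):

```python
import pandas as pd
from lxml import etree as et

def scrape_color(dom, data, i):
    try:
        # Text of the first <div> immediately following the "Color" label.
        data.loc[i, 'color'] = dom.xpath('//div[text()="Color"]/following-sibling::div[1]/text()')[0]
    except Exception:
        data.loc[i, 'color'] = 'Color is not available'
```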

Starting the Extraction Procedure: Bringing That All Together

With the necessary functions defined, we can now start the scraping procedure by calling them in sequence to retrieve the desired data.


The first step is navigating to Costco's electronics category page using the webdriver and the appropriate URL. We then use the click_url() function to click on the Audio/Video category and scrape the HTML content of the resulting page.
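Putting the navigation step together (the URL is illustrative; driver, click_url(), and extract_content() are the pieces sketched earlier, and a live browser is required):

```python
# Open the electronics category page, click through to Audio/Video,
# then parse the resulting page into an ElementTree object.
driver.get('https://www.costco.com/electronics.html')
click_url()
url_content = extract_content(driver.current_url)
```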


To store the extracted data, we create a dictionary with the required columns: 'product_url', 'brand', 'item_id', 'color', 'product_name', 'price', 'model', 'category', 'connection_type', and 'description'. We then build a dataframe from this dictionary, called 'data', to hold all the extracted data.
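A minimal sketch of the empty frame (the column order is illustrative):

```python
import pandas as pd

# One column per feature to scrape; the frame starts with zero rows
# and is filled in product by product.
columns = ['product_url', 'product_name', 'brand', 'price', 'item_id',
           'description', 'model', 'connection_type', 'category', 'color']
data = pd.DataFrame({col: [] for col in columns})
```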


The script then calls the product_links(url_content) function, which scrapes the links of all products available under the four subcategories of the Audio/Video category. These links are added to the 'product_url' column of the dataframe 'data'.
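A sketch of the collection step (url_content and product_links() come from the earlier steps, so this is not runnable offline; reindex() grows the empty frame to one row per product before the column is filled):

```python
links = product_links(url_content)
data = data.reindex(range(len(links)))  # one row per product
data['product_url'] = links
```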


The code then iterates over every product in the 'data' dataframe, reading the product URL from the 'product_url' column and using the extract_content() function to retrieve the HTML content of the product page. It then calls the previously defined functions to scrape particular features such as brand, model, price, color, connection type, category, description, item id, and product name, assigning each value to the corresponding column of the dataframe at the current index, thereby collecting all the needed data for every product.
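The iteration can be sketched as below; the scrape_* helper names are illustrative stand-ins for the attribute functions described in this post, and a live browser session is required:

```python
for i, url in enumerate(data['product_url']):
    dom = extract_content(url)  # fetch and parse the product page
    scrape_product_name(dom, data, i)
    scrape_brand(dom, data, i)
    scrape_price(dom, data, i)
    scrape_item_id(dom, data, i)
    scrape_description(dom, data, i)
    scrape_model(dom, data, i)
    scrape_connection_type(dom, data, i)
    scrape_category(dom, data, i)
    scrape_color(dom, data, i)
```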


With this final line of code, the dataframe 'data', containing all the extracted data for every product, is exported to a CSV file named 'costco_data.csv'. This makes the extracted data easy to access and manipulate for further use or analysis.
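The export is a single to_csv() call; it is shown here on a small stand-in frame named demo so the snippet runs on its own (the real call is simply data.to_csv('costco_data.csv', index=False)):

```python
import pandas as pd

# Stand-in frame; any dataframe exports the same way.
demo = pd.DataFrame({'product_name': ['Demo Soundbar'], 'price': ['$199.99']})
demo.to_csv('costco_data.csv', index=False)  # index=False keeps row numbers out of the file
```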

Conclusion

By mastering the fundamentals of web extraction, you can unlock a world of valuable data for an extensive range of applications, including market research, data analysis, and more. With the ability to scrape and analyze data from websites, the opportunities are endless!

We believe this blog has given you a strong foundation in web scraping methods and inspired you to explore the many possibilities web scraping has to offer. So, what are you waiting for? Start exploring and see what insights you can discover with the power of web scraping.

Ready to experience the power of data scraping for your business? Contact Actowiz Solutions now! You can also reach us for your mobile app scraping and web scraping service requirements.
