
Running Python on Windows: A Beginner's Guide


Augustas Pelakauskas

2025-02-19 | 3 min read

This tutorial shows how to run Python on Windows, focusing on web data extraction. It covers workspace setup, Python installation, configuring Visual Studio Code, and preparing web scraping code to extract product data from an e-commerce site.

Prepare your development workspace

A development workspace is the foundation of a coding environment. Let’s start by organizing the project files.

Project-based directory structure:

C:\Users\YourName\Projects\
    ├── python_projects\
    │   ├── project1\
    │   └── project2\
    └── venv\

To create this structure, open Command Prompt (cmd.exe) and execute:

mkdir C:\Users\%USERNAME%\Projects\python_projects
cd C:\Users\%USERNAME%\Projects\python_projects
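The same layout can also be created from Python itself. Here's a minimal sketch using the standard-library pathlib module; the base path mirrors the structure above and should be adjusted to your own setup:

```python
from pathlib import Path

# Base folder for all coding work (adjust to your own drive/username)
base = Path.home() / "Projects"

# Create the project folders; parents=True builds missing intermediate
# directories, exist_ok=True makes the script safe to re-run
for sub in ("python_projects/project1", "python_projects/project2", "venv"):
    (base / sub).mkdir(parents=True, exist_ok=True)

print(sorted(p.name for p in (base / "python_projects").iterdir()))
```

Either approach works; the Python version is handy if you want to script your workspace setup once and reuse it on any machine.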

Install Python

  1. Go to the official Python website (python.org) and download the latest stable release.

  2. During installation, make sure to check Add Python to PATH. To verify the installation, open Command Prompt and type:

python --version

# Expected output with a version number:
Python 3.X.X  

Verify package management functionality via Command Prompt:

pip --version

# Expected output:
pip 25.X.X from C:\Users\...\pip (python 3.X)

Install Visual Studio Code

In simple terms, VS Code is a text editor for source code, widely considered among the best integrated development environments (IDEs). IDEs help with debugging, syntax error highlighting, running code, and many other conveniences.

  1. Download VS Code.

  2. Install the Python extension:

  • Open VS Code

  • Press Ctrl+Shift+X

  • Search for Python

  • Install Microsoft's Python extension

  3. Configure the Python interpreter:

  • Press Ctrl+Shift+P

  • Select Python: Select Interpreter

  • Choose your Python installation

  4. Configure VS Code for Python development:

// settings.json
{
    "python.defaultInterpreterPath": "C:\\Users\\YourName\\Projects\\python_projects\\venv\\Scripts\\python.exe",
    "python.formatting.provider": "black",
    "editor.formatOnSave": true
}

Web scraping code example

Before running Python scripts, install the required packages: Requests for HTTP operations and Beautiful Soup for HTML parsing.

pip install requests beautifulsoup4

The following Python code uses the freshly installed Requests and Beautiful Soup libraries, along with the built-in csv module.

Here's an example of web scraping from a mock e-commerce marketplace, extracting product titles and prices.

# Import required libraries
import requests                # For making web requests
from bs4 import BeautifulSoup  # For parsing HTML
import csv                     # For saving data

def scrape_products():
    """
    Scrapes product information from a web page and saves to CSV.
    """
    # Step 1: Get the web page
    url = 'https://sandbox.oxylabs.io/products/category/pc'
    webpage = requests.get(url)
    
    # Step 2: Parse HTML content
    soup = BeautifulSoup(webpage.text, 'html.parser')
    
    # Step 3: Find all products
    products = soup.find_all('div', class_='product-card')
    
    # Step 4: Save data to CSV file
    with open('products.csv', 'w', newline='', encoding='utf-8') as file:
        writer = csv.writer(file)
        
        # Write header row
        writer.writerow(['Product Name', 'Price'])
        
        # Write product data
        for product in products:
            # Get product details
            name = product.find('h4').text.strip()
            price = product.find(class_='price-wrapper').text.strip()
            
            # Save to CSV
            writer.writerow([name, price])
    
    print("Check products.csv for the results.")

# Run the scraper
scrape_products()

This Python code exercises several basics:

  • HTTP GET request

  • HTML parsing

  • CSV data writing

The code creates a CSV file containing product information in your working directory. Each row contains a product title and its corresponding price.

For optimal performance in real-world scenarios, the Python code could, at the very least, include:

  • User-Agent headers to identify as an actual browser

  • Error handling for network and parsing issues

  • Retry logic or timeout configurations

  • Proxy support to avoid blocks

NOTE: Setting up proxies through Windows system settings isn’t the best approach for scraping. Instead, integrate proxies directly in the Python requests code so the scraper controls exactly which requests go through the proxy.
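Putting those points together, here's a hedged sketch of a hardened request. The User-Agent string is an example, and the proxy value is a placeholder to be replaced with your own credentials (or left as None to connect directly):

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Identify as a regular browser instead of the default python-requests UA
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}

# Placeholder: swap in real proxy credentials, or keep None to go direct
proxies = None  # e.g. {"http": "http://user:pass@host:port", "https": "http://user:pass@host:port"}

# Retry transient failures (connection errors, 5xx responses) with backoff
session = requests.Session()
retries = Retry(total=3, backoff_factor=1, status_forcelist=[500, 502, 503, 504])
session.mount("https://", HTTPAdapter(max_retries=retries))

try:
    response = session.get(
        "https://sandbox.oxylabs.io/products/category/pc",
        headers=headers,
        proxies=proxies,
        timeout=10,  # seconds; fail fast instead of hanging forever
    )
    response.raise_for_status()  # surface 4xx/5xx as exceptions
    print(f"Fetched {len(response.text)} bytes")
except requests.RequestException as err:
    # Covers connection errors, timeouts, and HTTP error statuses
    print(f"Request failed: {err}")
```

Swapping `webpage = requests.get(url)` in the earlier scraper for a session configured like this keeps the parsing logic unchanged while making the network layer far more forgiving.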

Prebuilt scrapers

Prebuilt web scraper APIs automate and handle processes in web data collection architectures. Coding is reduced to a minimum: you describe the task you want done without scripting the complex processes yourself.

Up-to-date scraper APIs handle complex JavaScript execution and dynamic content loading, rendering modern web applications with high accuracy.

The key pros of scraper APIs are saving time and reducing engineering resources.

Infrastructure management

  • Elimination of server maintenance overhead

  • Built-in handling of IP rotation and proxy management

  • Automatic scaling based on throughput requirements

Reliability engineering

  • Uptime management (typically 99.9%+)

  • Built-in retry mechanisms for failed requests

  • Automatic handling of CAPTCHAs and anti-bot measures

Data quality assurance

  • Structured output formats (JSON, CSV)

  • Consistent parsing of dynamic JavaScript content

  • Built-in error handling and reporting mechanisms
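To make the workflow concrete, here's a hedged sketch of what calling a scraper API typically looks like. The endpoint URL, payload fields, and credentials below are illustrative assumptions modeled on the general shape of such APIs; consult the provider's documentation for the exact parameters:

```python
import requests

# Illustrative only: endpoint and payload fields are assumptions modeled
# on typical scraper APIs; check the provider's docs for exact names
payload = {
    "source": "universal",  # generic scraping source (assumed field)
    "url": "https://sandbox.oxylabs.io/products/category/pc",
}

try:
    response = requests.post(
        "https://example-scraper-api.invalid/v1/queries",  # placeholder endpoint
        auth=("YOUR_USERNAME", "YOUR_PASSWORD"),           # placeholder credentials
        json=payload,
        timeout=30,
    )
    print(response.status_code)
except requests.RequestException as err:
    print(f"Request failed: {err}")
```

The key difference from the DIY scraper above: proxies, retries, JavaScript rendering, and anti-bot handling happen on the provider's side, and your code only submits a task and reads back structured results.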

Oxylabs Web Scraper API

Claim a 1-week free trial and automate scraping with maintenance-free infrastructure.

  • 5K requests for FREE
  • No credit card is required

Wrap up

Python is arguably the simplest way to collect publicly available web data. With simple Python code, you can send an HTTP request, parse the HTML, and structure the extracted data. It’s straightforward when targeting simple websites.

However, collecting large amounts of data from challenging targets, such as the most popular e-commerce websites, is increasingly difficult. You have to juggle many variables (proxies, headless browsers, JavaScript rendering, HTTP headers, retries) to obtain any kind of data before even considering bulk extraction at regular intervals.

For answers to common questions, check the top Python web scraping questions.

Learn how Python compares to other programming languages in web scraping.

Frequently asked questions

How do I run a Python program on Windows?

Install Python:

  • Download the official Python installer from python.org

  • Run the installer with Add Python to PATH enabled

Run Python code using one of the following:

  • Command line (Command Prompt or PowerShell)

  • Integrated development environment (IDE)

  • IDLE (Python's built-in editor)

How do I run Python from the command line?

To run a Python script on Windows, open Command Prompt (cmd.exe), navigate to your program's directory using the cd command, type python filename.py, and press Enter.

cd C:\Users\YourName\Projects
python filename.py

If it doesn't work, ensure Python is installed and added to your system's PATH environment variable. You can verify the installation by running:

python --version

How to run Python output in cmd?

To run Python output in Command Prompt on Windows:

  • Open Command Prompt (cmd.exe)

  • Navigate to your Python file's directory using: cd path\to\directory

  • Run the Python file with: python filename.py

cd C:\Users\YourName\Documents
python filename.py

NOTE: Ensure Python is added to your system's PATH environment variable.

How to run the first Python program?

The simplest way to run your first program:

  • Download Python from python.org and install it (check Add Python to PATH during installation).

  • Open Notepad, write your Python code, and save it with a .py file extension.

  • Open a Command Prompt (cmd).

  • Navigate to your file's location using the cd command.

  • Run your program by typing python your_file.py.
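The steps above boil down to a one-line program. Save something like this as your_file.py and run it with python your_file.py:

```python
# your_file.py -- a minimal first Python program
greeting = "Hello from Python on Windows!"
print(greeting)
```

If the greeting appears in your Command Prompt window, Python is installed correctly and on your PATH.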

About the author


Augustas Pelakauskas

Senior Copywriter

Augustas Pelakauskas is a Senior Copywriter at Oxylabs. Coming from an artistic background, he is deeply invested in various creative ventures - the most recent one being writing. After testing his abilities in the field of freelance journalism, he transitioned to tech content creation. When at ease, he enjoys sunny outdoors and active recreation. As it turns out, his bicycle is his fourth best friend.

All information on Oxylabs Blog is provided on an "as is" basis and for informational purposes only. We make no representation and disclaim all liability with respect to your use of any information contained on Oxylabs Blog or any third-party websites that may be linked therein. Before engaging in scraping activities of any kind you should consult your legal advisors and carefully read the particular website's terms of service or receive a scraping license.
