
Machine Learning Operations & Data Engineering Intern

Join us from June 17 to August 21, 2026, and experience what it means to build a meaningful career rooted in purpose, strengthened by connection, and driven by opportunity. This 10-week program is designed to accelerate your learning and career exploration. You’ll gain hands-on experience in a publicly traded, global real estate investment trust while developing the skills, confidence, and relationships that can shape your future.

 

Realty Income is looking for a Machine Learning Operations & Data Engineering Intern to join our in-house Predictive Analytics team for the summer of 2026. In this role, you will collaborate with the team to build, validate, and optimize machine learning models, contributing to the entire development lifecycle from data preprocessing to model deployment. This internship offers a unique opportunity to apply theoretical knowledge to real-world projects and to gain hands-on experience in the rapidly evolving fields of machine learning and data science.

 

What You Will Be Working On:

  • Collaborate in a modern analytics ecosystem, working with Databricks to engineer data pipelines, develop features, and train models using Spark and integrated MLflow tracking.
  • Develop and implement machine learning solutions by building, validating, and optimizing predictive models.
  • Design robust data workflows by contributing to the creation and maintenance of data ingestion, transformation, and validation processes that ensure integrity and reliability.
  • Build reusable analytics components, including modular code, reusable pipelines, and shared feature libraries to accelerate model development.
  • Monitor and improve performance by analyzing system and model efficiency, identifying bottlenecks, and implementing solutions.
  • Collaborate across disciplines by working closely with data scientists, engineers, and business stakeholders to translate analytical needs into production-ready models and dashboards.
  • Ensure code quality through Git version control and by creating unit and integration tests that maintain standards for reliability, traceability, and collaboration.
  • Support data access and governance by assisting in improving data documentation and maintaining compliance with internal data management practices.
  • Contribute to team initiatives by providing analytical and technical support on department-wide projects and prototypes.

 

What You Need to Be Successful:

We are looking for a student who will be a rising senior during the summer of 2026, pursuing a degree in Computer Science, Information Technology, Software Engineering, Machine Learning, or a related field, with a minimum GPA of 3.5.

  • Solid programming foundation and hands-on experience with Python, SQL, and Spark.
  • Practical experience with core ML and data libraries such as pandas, scikit-learn, and PySpark.
  • Experience using Git-based version control (GitHub or Azure DevOps).
  • Understanding of data structures, data modeling, and software architecture.
  • Familiarity with cloud computing platforms such as Azure or AWS.
  • Critical thinker with the ability to synthesize complex information and conceptualize solutions.
  • Ability to foster strong, collaborative relationships and communicate effectively at all levels.
  • A team player who displays self-confidence and encourages collaboration.
  • Demonstrated integrity and commitment to the highest ethical standards and personal values.
  • Self-motivation and initiative to organize and prioritize work.

 

This is a hybrid role, with in-office work required on Tuesdays, Wednesdays, and Thursdays.

 

The hourly compensation for this role ranges from $21.00 to $25.00. The rate is determined by the candidate's skills, experience, knowledge, education, and abilities.