5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Teamwork makes the stream work. Roku is changing how the world watches TV. Roku is the #1 TV streaming platform in the U.S., Canada, and Mexico, and we've set our sights on powering every television in the world. Roku pioneered streaming to the TV. Our mission is to be the TV streaming platform that connects the entire TV ecosystem. We connect consumers to the content they love, enable content publishers to build and monetize large audiences, and provide advertisers unique capabilities to engage consumers. From your first day at Roku, you'll make a valuable - and valued - contribution. We're a fast-growing public company where no one is a bystander. We offer you the opportunity to delight millions of TV streamers around the world while gaining meaningful experience across a variety of disciplines.

About the team
The mission of Roku's Data Engineering team is to develop a world-class big data platform so that internal and external customers can leverage data to grow their businesses. Data Engineering works closely with business partners and Engineering teams to collect metrics on existing and new initiatives that are critical to business success. As a Senior Data Engineer working on device metrics, you will design data models and develop scalable data pipelines to capture business metrics across Roku devices.

About the role
Roku pioneered streaming to the TV. We connect users to the streaming content they love, enable content publishers to build and monetize large audiences, and provide advertisers with unique capabilities to engage consumers. Roku streaming players and Roku TV™ models are available around the world through direct retail sales and licensing arrangements with TV brands and pay-TV operators. With tens of millions of players sold across many countries, thousands of streaming channels, and billions of hours watched over the platform, building a scalable, highly available, fault-tolerant big data platform is critical to our success. This role is based in Bangalore, India and requires hybrid working, with 3 days in the office.

What you'll be doing
- Build highly scalable, available, fault-tolerant distributed data processing systems (batch and streaming) that handle tens of terabytes of data ingested daily and a petabyte-sized data warehouse
- Build quality data solutions and refine existing diverse datasets into simplified data models that encourage self-service
- Build data pipelines that optimize for data quality and are resilient to poor-quality data sources
- Own the data mapping, business logic, transformations, and data quality
- Perform low-level systems debugging, performance measurement, and optimization on large production clusters
- Participate in architecture discussions, influence the product roadmap, and take ownership and responsibility for new projects
- Maintain and support existing platforms and evolve them to newer technology stacks and architectures

We're excited if you have
- Extensive SQL skills
- Proficiency in at least one scripting language; Python is required
- Experience with big data technologies such as HDFS, YARN, MapReduce, Hive, Kafka, Spark, Airflow, and Presto
- Proficiency in data modeling, including designing, implementing, and optimizing conceptual, logical, and physical data models to support scalable and efficient data architectures
- Experience with AWS, GCP, or Looker (a plus)
- Ability to collaborate with cross-functional teams such as developers, analysts, and operations to execute deliverables
- 5+ years of professional experience as a data or software engineer
- BS in Computer Science; MS in Computer Science preferred
- AI literacy and an AI growth mindset

Benefits
Roku is committed to offering a diverse range of benefits as part of our compensation package to support our employees and their families. Our comprehensive benefits include global access to mental health and financial wellness support and resources. Local benefits include statutory and voluntary benefits, which may include healthcare (medical, dental, and vision), life, accident, disability, commuter, and retirement options (401(k)/pension). Our employees can take time off work for vacation and other personal reasons to balance their evolving work and life needs. It's important to note that not every benefit is available in all locations or for every role. For details specific to your location, please consult with your recruiter.

The Roku Culture
Roku is a great place for people who want to work in a fast-paced environment where everyone is focused on the company's success rather than their own. We try to surround ourselves with people who are great at their jobs, who are easy to work with, and who keep their egos in check. We appreciate a sense of humor. We believe fewer very talented folks can do more at less cost than a larger number of less talented teams. We're independent thinkers with big ideas who act boldly, move fast, and accomplish extraordinary things through collaboration and trust. In short, at Roku you'll be part of a company that's changing how the world watches TV.

We have a unique culture that we are proud of. We think of ourselves primarily as problem-solvers, which itself is a two-part idea. We come up with the solution, but the solution isn't real until it is built and delivered to the customer. That penchant for action gives us a pragmatic approach to innovation, one that has served us well since 2002. To learn more about Roku, our global footprint, and how we've grown, visit https://www.weareroku.com/factsheet.

By providing your information, you acknowledge that you have read our Applicant Privacy Notice and authorize Roku to process your data subject to those terms.
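To make the "resilient to poor-quality data sources" pipeline work above concrete, here is a minimal PySpark sketch. It is illustrative only: the S3 paths, column names, and schema are hypothetical placeholders, not Roku's actual data model.

```python
# Illustrative sketch of a batch refinement job: quarantine bad rows instead
# of failing, then publish a simplified, self-service-friendly daily model.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("device-metrics-refinement").getOrCreate()

# Read one day of raw device events (hypothetical S3 layout).
raw = spark.read.parquet("s3://example-bucket/raw/device_events/dt=2024-01-01/")

# Resilience to poor-quality sources: keep rows with the required keys,
# quarantine the rest for later inspection rather than aborting the job.
valid = raw.filter(F.col("device_id").isNotNull() & F.col("event_ts").isNotNull())
quarantined = raw.subtract(valid)
quarantined.write.mode("overwrite").parquet("s3://example-bucket/quarantine/dt=2024-01-01/")

# Refine diverse raw events into a simplified daily metrics model.
daily_metrics = (
    valid.groupBy("device_id", F.to_date("event_ts").alias("event_date"))
    .agg(
        F.count("*").alias("event_count"),
        F.sum("watch_seconds").alias("total_watch_seconds"),
    )
)

daily_metrics.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/device_daily_metrics/"
)
```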
Posted 6 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Equifax is seeking creative, high-energy, and driven software engineers with hands-on development skills to work on a variety of meaningful projects. Our software engineering positions provide you the opportunity to join a team of talented engineers working with leading-edge technology. You are ideal for this position if you are a forward-thinking, committed, and enthusiastic software engineer who is passionate about technology.

What You'll Do
- Perform general application development activities, including unit testing, code deployment to the development environment, and technical documentation.
- Work on one or more projects, making contributions to unfamiliar code written by team members.
- Diagnose and resolve performance issues.
- Participate in the estimation process, use case specifications, reviews of test plans and test cases, requirements, and project planning.
- Document code and processes so that any other developer can dive in with minimal effort.
- Develop and operate high-scale applications from the backend to the UI layer, focusing on operational excellence, security, and scalability.
- Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.).
- Work across teams to integrate our systems with existing internal systems, Data Fabric, and the CSA Toolset.
- Participate in technology roadmap and architecture discussions to turn business requirements and vision into reality.
- Participate in a tight-knit engineering team employing agile software development practices.
- Triage product or system issues and debug/track/resolve them by analyzing the sources of issues and their impact on network or service operations and quality.
- Write, debug, and troubleshoot code in mainstream open-source technologies.
- Lead the effort for sprint deliverables and solve problems of medium complexity.
- Research, create, and develop software applications to extend and improve Equifax solutions.
- Collaborate on scalability issues involving access to data and information.
- Actively participate in sprint planning, sprint retrospectives, and other team activities.

What Experience You Need
- Bachelor's degree or equivalent experience.
- 5+ years of working experience in software development using multiple versions of Python.
- Experience and familiarity with the Python frameworks currently in use to support software development processes.
- Develop, test, and deploy high-quality Python code for AI/ML applications, data pipelines, and backend services.
- Design, implement, and optimize machine learning models and algorithms for various business problems.
- Collaborate with data scientists to transition experimental models into production-ready systems.
- Build and maintain robust data ingestion and processing pipelines to feed data into ML models.
- Perform code reviews, provide constructive feedback, and ensure adherence to best coding practices.
- Troubleshoot, debug, and optimize existing ML systems and applications for performance and scalability.
- Stay up to date with the latest advancements in Python, machine learning, and related technologies.
- Document technical designs, processes, and operational procedures.
- Experience with cloud technology: GCP or AWS.

What Could Set You Apart
- A self-starter who identifies and responds to priority shifts with minimal supervision.
- Experience designing and developing big data processing solutions using Dataflow/Apache Beam, Bigtable, BigQuery, Pub/Sub, GCS, Composer/Airflow, and others (see the sketch below).
- Source code control management systems (e.g., Git, GitHub).
- Agile environments (e.g., Scrum, XP).
- Atlassian tooling (e.g., JIRA, Confluence, and GitHub).
- Developing with modern Python versions.
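As a minimal sketch of the Dataflow/Apache Beam style of work named above, the following Python pipeline reads from Pub/Sub and writes to BigQuery. The project, topic, table, and schema are invented for illustration.

```python
# Hedged sketch of a streaming Beam pipeline; names are hypothetical.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadEvents" >> beam.io.ReadFromPubSub(topic="projects/example-project/topics/events")
        | "Parse" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
        | "KeepValid" >> beam.Filter(lambda row: "user_id" in row)  # drop malformed events
        | "WriteToBQ" >> beam.io.WriteToBigQuery(
            "example-project:analytics.events",
            schema="user_id:STRING,event_type:STRING,ts:TIMESTAMP",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```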
Posted 1 week ago
6.0 - 8.0 years
19 - 35 Lacs
Hyderabad
Work from Office
We are hiring a Senior Data Engineer with 6-8 years of experience. Education: Candidates from premier institutes such as IIT, IIM, IISc, NIT, IIIT, and other top-ranked institutions in India are highly encouraged to apply.
Posted 1 week ago
4.0 - 9.0 years
9 - 14 Lacs
Bengaluru
Work from Office
Job Posting Title: SR. DATA SCIENTIST
Band/Level: 5-2-C
Education Experience: Bachelor's Degree (High School + 4 years)
Employment Experience: 5-7 years

At TE, you will unleash your potential working with people from diverse backgrounds and industries to create a safer, sustainable and more connected world.

Job Overview
Solves complex problems and helps stakeholders make data-driven decisions by leveraging quantitative methods, such as machine learning. The work often involves synthesizing large volumes of information and extracting signals from data in a programmatic way.

Roles & Responsibilities
- Design, train, and evaluate supervised and unsupervised models (regression, classification, clustering, uplift).
- Apply automated hyperparameter optimization (Optuna, HyperOpt) and interpretability techniques (SHAP, LIME).
- Perform deep exploratory data analysis (EDA) to uncover patterns and anomalies.
- Engineer predictive features from structured, semi-structured, and unstructured data; manage feature stores (Feast).
- Ensure data quality through rigorous validation and automated checks.
- Build hierarchical, intermittent, and multi-seasonal forecasts for thousands of SKUs.
- Implement traditional (ARIMA, ETS, Prophet) and deep-learning (RNN/LSTM, Temporal Fusion Transformer) approaches (see the sketch after this posting).
- Reconcile forecasts across product/category hierarchies; quantify accuracy (MAPE, WAPE) and bias.
- Establish model tracking and registry (MLflow, SageMaker Model Registry).
- Develop CI/CD pipelines for automated retraining, validation, and deployment (Airflow, Kubeflow, GitHub Actions).
- Monitor data and concept drift; trigger retuning or rollback as needed.
- Design and analyze A/B tests, causal inference studies, and Bayesian experiments.
- Provide statistically grounded insights and recommendations to stakeholders.
- Translate business objectives into data-driven solutions; present findings to executive and non-technical audiences.
- Mentor junior data scientists, review code/notebooks, and champion best practices.

Desired Candidate
Minimum Qualifications
- M.S. in Statistics (preferred) or a related field such as Applied Mathematics, Computer Science, or Data Science.
- 5+ years building and deploying ML models in production.
- Expert-level proficiency in Python (Pandas, NumPy, SciPy, scikit-learn), SQL, and Git.
- Demonstrated success delivering large-scale demand-forecasting or time-series solutions.
- Hands-on experience with MLOps tools (MLflow, Kubeflow, SageMaker, Airflow) for model tracking and automated retraining.
- Solid grounding in statistical inference, hypothesis testing, and experimental design.

Preferred / Nice-to-Have
- Experience in supply-chain, retail, or manufacturing domains with high-granularity SKU data.
- Familiarity with distributed data frameworks (Spark, Dask) and cloud data warehouses (BigQuery, Snowflake).
- Knowledge of deep-learning libraries (PyTorch, TensorFlow) and probabilistic programming (PyMC, Stan).
- Strong data-visualization skills (Plotly, Dash, Tableau) for storytelling and insight communication.

ABOUT TE CONNECTIVITY
TE Connectivity plc (NYSE: TEL) is a global industrial technology leader creating a safer, sustainable, productive, and connected future. Our broad range of connectivity and sensor solutions enable the distribution of power, signal and data to advance next-generation transportation, energy networks, automated factories, data centers, medical technology and more.
With more than 85,000 employees, including 9,000 engineers, working alongside customers in approximately 130 countries, TE ensures that EVERY CONNECTION COUNTS. Learn more at www.te.com and on LinkedIn, Facebook, WeChat, Instagram and X (formerly Twitter).

WHAT TE CONNECTIVITY OFFERS:
We are pleased to offer you an exciting total package that can also be flexibly adapted to changing life situations - the well-being of our employees is our top priority!
- Competitive Salary Package
- Performance-Based Bonus Plans
- Health and Wellness Incentives
- Employee Stock Purchase Program
- Community Outreach Programs / Charity Events

IMPORTANT NOTICE REGARDING RECRUITMENT FRAUD
TE Connectivity has become aware of fraudulent recruitment activities being conducted by individuals or organizations falsely claiming to represent TE Connectivity. Please be advised that TE Connectivity never requests payment or fees from job applicants at any stage of the recruitment process. All legitimate job openings are posted exclusively on our official careers website at te.com/careers, and all email communications from our recruitment team will come only from actual email addresses ending in @te.com. If you receive any suspicious communications, we strongly advise you not to engage or provide any personal information, and to report the incident to your local authorities.

Across our global sites and business units, we put together packages of benefits that are either supported by TE itself or provided by external service providers. In principle, the benefits offered can vary from site to site.
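The forecasting responsibilities in this posting (Prophet, MAPE on holdout data) can be illustrated with a small sketch. It is a toy example under stated assumptions: the weekly demand series, horizon, and seasonality settings are invented, not TE's actual data.

```python
# Illustrative only: fit a Prophet forecast for one SKU and score MAPE on a
# 13-week holdout. Prophet expects columns named "ds" (date) and "y" (target).
import pandas as pd
from prophet import Prophet

history = pd.DataFrame({
    "ds": pd.date_range("2023-01-01", periods=104, freq="W"),
    "y": [100 + (i % 52) for i in range(104)],  # toy yearly-seasonal demand
})

train, holdout = history.iloc[:-13], history.iloc[-13:]

model = Prophet(weekly_seasonality=False, yearly_seasonality=True)
model.fit(train)

future = model.make_future_dataframe(periods=13, freq="W")
forecast = model.predict(future).tail(13)  # just the 13 forecast weeks

# MAPE on the holdout window.
mape = (abs(holdout["y"].values - forecast["yhat"].values) / holdout["y"].values).mean()
print(f"Holdout MAPE: {mape:.2%}")
```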
Posted 1 week ago
7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are looking for an MLOps Engineer @ Gurgaon.

Job Responsibilities:
- Design and implement CI/CD pipelines for machine learning workflows.
- Develop and maintain production-grade ML pipelines using tools like MLflow, Kubeflow, or Airflow.
- Automate model training, testing, deployment, and monitoring processes.
- Collaborate with Data Scientists to operationalize ML models, ensuring scalability and performance.
- Monitor deployed models for drift, degradation, and bias, and trigger retraining as needed.
- Maintain and improve infrastructure for model versioning, artifact tracking, and reproducibility.
- Integrate ML solutions with microservices/APIs using FastAPI or Flask (see the sketch below).
- Work on containerized environments using Docker and Kubernetes.
- Implement logging, monitoring, and alerting for ML systems (e.g., Prometheus, Grafana).
- Champion best practices in code quality, testing, and documentation.

Required Skills:
- 7+ years of experience in Python development and ML/AI-related engineering roles.
- Strong experience with MLOps tools like MLflow, Kubeflow, Airflow, or similar.
- Deep understanding of Docker, Kubernetes, and container orchestration for ML workflows.
- Hands-on experience with cloud platforms (AWS, GCP, Azure) and infrastructure-as-code (Terraform/CDK).
- Familiarity with model deployment and serving frameworks (e.g., Seldon, TorchServe, TensorFlow Serving).
- Good understanding of DevOps practices and CI/CD pipelines (Jenkins, GitHub Actions, GitLab CI).
- Experience with data versioning tools (e.g., DVC) and model lifecycle management.
- Exposure to monitoring tools for ML and infrastructure health.

Experience: 7-12 years
Job Location: Gurgaon
Interested candidates can share their CV with mangani.paramanandhan@bounteous.com; I will call you shortly.
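A minimal sketch of the FastAPI model-serving pattern this posting mentions, assuming a pre-trained scikit-learn model saved with joblib; the model path and feature names are hypothetical.

```python
# Hedged sketch: serve an ML model behind a FastAPI /predict endpoint.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # assumes a pre-trained sklearn model file


class Features(BaseModel):
    tenure_months: float
    monthly_spend: float


@app.post("/predict")
def predict(features: Features):
    # Shape the request into the 2-D array sklearn estimators expect.
    X = [[features.tenure_months, features.monthly_spend]]
    return {"prediction": float(model.predict(X)[0])}
```

Run with `uvicorn main:app` and POST JSON like `{"tenure_months": 12, "monthly_spend": 49.9}` to `/predict`.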
Posted 1 week ago
7.0 years
30 - 45 Lacs
Noida, Uttar Pradesh, India
On-site
We are looking for a customer-obsessed, analytical Sr. Staff Engineer to lead the development and growth of our Tax Compliance product suite. In this role, you'll shape innovative digital solutions that simplify and automate tax filing, reconciliation, and compliance workflows for businesses of all sizes. You will join a fast-growing company where you'll work in a dynamic and competitive market, impacting how businesses meet their statutory obligations with speed, accuracy, and confidence. As the Sr. Staff Engineer, you'll work closely with product, DevOps, and data teams to architect reliable systems, drive engineering excellence, and ensure high availability across our platform. We're looking for a technical leader who's not just an expert in building scalable systems, but also passionate about mentoring engineers and shaping the future of fintech.

Responsibilities
- Lead, mentor, and inspire a high-performing engineering team (or operate as a hands-on technical lead).
- Drive the design and development of scalable backend services using Python; experience with Django, FastAPI, and task orchestration systems.
- Own and evolve our CI/CD pipelines with Jenkins, ensuring fast, safe, and reliable deployments.
- Architect and manage infrastructure using AWS and Terraform with a DevOps-first mindset.
- Collaborate cross-functionally with product managers, designers, and compliance experts to deliver features that make tax compliance seamless for our users.
- Set and enforce engineering best practices, code quality standards, and operational excellence.
- Stay up to date with industry trends and advocate for continuous improvement in engineering processes.
- Experience in fintech, tax, or compliance industries.
- Familiarity with containerization tools like Docker and orchestration with Kubernetes.
- Background in security, observability, or compliance automation.

Requirements
- 7+ years of software engineering experience, with at least 2+ years in a leadership or principal-level role.
- Deep expertise in Python, including API development, performance optimization, and testing.
- Experience with event-driven architecture and Kafka/RabbitMQ-like systems.
- Strong experience with AWS services (e.g., ECS, Lambda, S3, RDS, CloudWatch).
- Solid understanding of Terraform for infrastructure as code.
- Proficiency with Jenkins or similar CI/CD tooling.
- Comfortable balancing technical leadership with hands-on coding and problem-solving.
- Strong communication skills and a collaborative mindset.

Skills: Python, Django, FastAPI, PostgreSQL, MongoDB, Redis, Apache Kafka, RabbitMQ, AWS Simple Notification Service (SNS), AWS Simple Queuing Service (SQS), Amazon Web Services (AWS), systems design, Apache Airflow, and Celery.
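Since the stack above names Celery and Redis for asynchronous work, here is a hedged sketch of an event-driven task with retries. The broker URL, task name, and reconciliation logic are placeholders, not this company's actual code.

```python
# Illustrative Celery task: retry transient failures in a reconciliation job.
from celery import Celery

app = Celery("compliance", broker="redis://localhost:6379/0")


@app.task(bind=True, max_retries=3, default_retry_delay=60)
def reconcile_filing(self, filing_id: str):
    try:
        # Placeholder for the real logic: fetch the filing, compare against
        # source records, and persist any discrepancies.
        print(f"Reconciling filing {filing_id}")
    except Exception as exc:
        # Retry up to 3 times, one minute apart, before giving up.
        raise self.retry(exc=exc)
```

A worker started with `celery -A tasks worker` would pick up `reconcile_filing.delay("FIL-001")` calls enqueued by the API layer.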
Posted 1 week ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Title: Big Data Engineer (AWS-Scala Specialist)
Location: Greater Noida/Hyderabad
Experience: 5-10 Years

About the Role
We are seeking a highly skilled Senior Big Data Engineer with deep expertise in Big Data technologies and AWS Cloud Services. The ideal candidate will bring strong hands-on experience in designing, architecting, and implementing scalable data engineering solutions while driving innovation within the team.

Key Responsibilities
- Design, develop, and optimize Big Data architectures leveraging AWS services for large-scale, complex data processing.
- Build and maintain data pipelines using Spark (Scala) for both structured and unstructured datasets.
- Architect and operationalize data engineering and analytics platforms (AWS preferred; Hortonworks, Cloudera, or MapR experience a plus).
- Implement and manage AWS services including EMR, Glue, Kinesis, DynamoDB, Athena, CloudFormation, API Gateway, and S3.
- Work on real-time streaming solutions using Kafka and AWS Kinesis (see the sketch below).
- Support ML model operationalization on AWS (deployment, scheduling, and monitoring).
- Analyze source system data and data flows to ensure high-quality, reliable data delivery for business needs.
- Write highly efficient SQL queries and support data warehouse initiatives using Apache NiFi, Airflow, and Kylo.
- Collaborate with cross-functional teams to provide technical leadership, mentor team members, and strengthen the data engineering capability.
- Troubleshoot and resolve complex technical issues, ensuring scalability, performance, and security of data solutions.

Mandatory Skills & Qualifications
✅ 5+ years of solid hands-on experience in Big Data technologies (AWS, Scala, Hadoop, and Spark mandatory)
✅ Proven expertise in Spark with Scala
✅ Hands-on experience with AWS services (EMR, Glue, Lambda, S3, CloudFormation, API Gateway, Athena, Lake Formation)

Share your resume at Aarushi.Shukla@coforge.com if you have experience with the mandatory skills and are an early joiner.
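The Kinesis streaming work above can be sketched in a few lines. Note the role itself is Scala-focused; this is a Python/boto3 sketch for brevity, and the stream name and payload are invented.

```python
# Hedged sketch: publish an event to a Kinesis stream with boto3.
import json

import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

event = {"device_id": "abc-123", "metric": "playback_start", "value": 1}

kinesis.put_record(
    StreamName="example-device-events",
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["device_id"],  # keeps one device's events on one shard
)
```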
Posted 1 week ago
10.0 - 15.0 years
35 - 50 Lacs
Bengaluru
Hybrid
What the job involves
You'll be joining a fast-growing, motivated, and talented team of engineers who are building innovative products that are transforming the mobile marketing industry. Our solutions enable clients to measure the effectiveness of their campaigns in novel and impactful ways. Working closely with our existing team of software engineers, you will contribute to the ongoing enhancement of our product suite. This includes adding new features to existing systems and helping to develop new systems that support upcoming product offerings.

As a Backend Technical Lead, you will play a pivotal role in designing, building, and maintaining scalable, observable backend systems using Golang and modern cloud-native architectures. You will lead hands-on development while also guiding the team in adopting best practices in monitoring, logging, tracing, and system reliability. Operational excellence is also a part of this role. To ensure the continued stability and performance of our systems, you will be expected to participate in the on-call rotation. This responsibility includes responding to incidents, troubleshooting production issues, and working with the team to implement long-term fixes. Your involvement will be critical in maintaining uptime and providing a seamless experience for our customers. Additionally, you will help drive improvements to our alerting systems and incident response processes to reduce noise and enhance efficiency.

Who you are
Required Skills:
- 10+ years of software engineering experience, including 4+ years with Golang, focused on building high-performance backend systems.
- Hands-on experience with messaging platforms such as Apache Pulsar, NATS JetStream, Kafka, or similar pub/sub or streaming technologies.
- Strong knowledge of observability practices, including instrumentation, OpenTelemetry, Prometheus, Grafana, logging pipelines (e.g., Loki, ELK stack), and distributed tracing.
- Proficient in REST API development, goroutines, and Go concurrency patterns.
- Deep understanding of microservices architecture and containerized deployments using Docker and Kubernetes.
- Experience with cloud platforms such as GCP, AWS, or Azure.
- Strong database skills: MySQL (mandatory), with additional exposure to distributed databases (e.g., Spanner).
- Proven ability to optimize applications for maximum performance and scalability.
- Solid experience in maintaining production systems, with the ability to quickly debug and resolve issues both during development and in live environments.

Optional Skills:
- Understanding of adtech, programmatic advertising, or mobile attribution systems.
- Experience using AI tools (e.g., GitHub Copilot and Claude Code) to assist development.

Soft Skills:
- Demonstrates strong ownership and the ability to work independently within a distributed, remote team.
- Possesses excellent problem-solving skills and a deep appreciation for clean, testable, and maintainable code.
- Eager to learn new technologies and explore unfamiliar domains.
- Comfortable mentoring team members and leading by example.
- Cares deeply about code quality; understands the importance of thorough testing and maintaining high standards.
- Collaborates effectively with a remote, international team.
Posted 1 week ago
3.0 - 8.0 years
5 - 15 Lacs
Hyderabad
Work from Office
Greetings! Hiring a GCP Data Engineer for the Hyderabad location.
Experience - 3 to 8 years
Skills - GCP, PySpark, DAGs, Airflow, Python, Teradata (good to have)
Job location - Hyderabad (WFO)
Interested candidates can share their profiles with anmol.bhatia@incedoinc.com. A minimal Airflow DAG sketch matching the listed skills follows.
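This is a toy illustration of the Airflow/DAG skill set named above; the DAG id, schedule, and task bodies are placeholders.

```python
# Minimal Airflow DAG: two Python tasks wired extract -> load.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull data from source")


def load():
    print("write data to warehouse")


with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task  # run extract before load
```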
Posted 1 week ago
3.0 - 7.0 years
15 - 30 Lacs
Bengaluru
Hybrid
What the job involves
You will be joining a fast-growing team of motivated and talented engineers, helping us to build and enhance a suite of innovative products that are changing the mobile marketing industry by enabling our clients to measure the effectiveness of their campaigns in a completely novel way. Working closely with our existing team of software engineers, you will contribute to improving our product suite. You will do this by adding new features to our existing systems and helping create new systems to facilitate new product offerings. You'll work under the mentorship of a lead software engineer who will support you and manage your onboarding and continuous professional development needs. We operate a blameless culture with a flat organizational structure where all input and ideas are welcome; we make decisions fast and value good ideas over seniority, so everyone in the team can make a real difference in product evolution.

Who you are
Required Skills:
- You have at least 3-7 years of commercial experience with software engineering in Golang, including REST API development and a strong understanding of data structures and concurrency using goroutines.
- Hands-on experience with messaging platforms such as Apache Pulsar, Kafka, or Pub/Sub.
- Good knowledge of observability practices, including instrumentation, OpenTelemetry, Prometheus, Grafana, and logging pipelines.
- Experience with relational databases (e.g., MySQL) is a must; exposure to GraphQL and NoSQL databases (e.g., Mongo) would be an advantage.
- You've worked with microservice architectures with a good appreciation of performance and quality requirements.
- Hands-on experience with Docker containers and Kubernetes.
- Experience working with any cloud platform such as GCP, AWS, or Azure.
- Experience with data engineering technologies like ELT/ETL workflows, Kafka, Airflow, etc.

Optional Skills:
- Any experience with C#, Python, or NodeJS.
- Understanding of the adtech landscape and familiarity with mobile advertising measurement solutions is highly desirable.
- Experience using AI-powered coding assistants like GitHub Copilot.

Soft Skills:
- You enjoy new challenges and gain satisfaction from solving interesting problems in a wide range of areas.
- You care deeply about the quality of your work; you are sensitive to the importance of testing your code thoroughly and maintaining it to a high standard.
- You don't need to be micromanaged; you'll ask for help when you need it, but you can apply initiative to solve problems on your own.
- You are enthusiastic about broadening your skill set; you are willing and able to quickly learn new techniques and technologies.
- You know how to collaborate effectively with a remote, international team.
Posted 1 week ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary
Manager, Software Engineer

Overview
We are the global technology company behind the world's fastest payments processing network. We are a vehicle for commerce, a connection to financial systems for the previously excluded, a technology innovation lab, and the home of Priceless®. We ensure every employee has the opportunity to be a part of something bigger and to change lives. We believe as our company grows, so should you. We believe in connecting everyone to endless, priceless possibilities.

Our Team Within Mastercard – Data & Services
The Data & Services team is a key differentiator for Mastercard, providing the cutting-edge services that are used by some of the world's largest organizations to make multi-million dollar decisions and grow their businesses. Focused on thinking big and scaling fast around the globe, this agile team is responsible for end-to-end solutions for a diverse global customer base. Centered on data-driven technologies and innovation, these services include payments-focused consulting, loyalty and marketing programs, business Test & Learn experimentation, and data-driven information and risk management services.

Targeting Analytics Program
Within the D&S Technology Team, the Targeting Analytics program is a relatively new program comprising a rich set of products that provide accurate perspectives on Credit Risk, Portfolio Optimization, and Ad Insights. Currently, we are enhancing our customer experience with new user interfaces, moving to API-based data publishing to allow for seamless integration into other Mastercard products and externally, utilizing new data sets and algorithms to further analytic capabilities, and generating scalable big data processes.

We are seeking an innovative Lead Software Engineer to lead our team in designing and building a full-stack web application and data pipelines. The goal is to deliver custom analytics efficiently, leveraging machine learning and AI solutions. This individual will thrive in a fast-paced, agile environment and partner closely with other areas of the business to build and enhance solutions that drive value for our customers.

Engineers work in small, flexible teams. Every team member contributes to designing, building, and testing features. The range of work you will encounter varies from building intuitive, responsive UIs to designing backend data models, architecting data flows, and beyond. There are no rigid organizational structures, and each team uses processes that work best for its members and projects.

Here are a few examples of products in our space:
- Portfolio Optimizer (PO) is a solution that leverages Mastercard's data assets and analytics to allow issuers to identify and increase revenue opportunities within their credit and debit portfolios.
- Audiences uses anonymized and aggregated transaction insights to offer targeting segments that have a high likelihood to make purchases within a category, allowing for more effective campaign planning and activation.
- Credit Risk products are a new suite of APIs and tooling that provide lenders real-time access to KPIs and insights, serving thousands of clients to make smarter risk decisions using Mastercard data.

Help found a new, fast-growing engineering team!

Position Responsibilities
As a Lead Software Engineer, you will:
- Lead the scoping, design, and implementation of complex features.
- Push the boundaries of analytics and powerful, scalable applications.
- Design and implement intuitive, responsive UIs that allow issuers to better understand data and analytics.
- Build and maintain analytics and data models to enable performant and scalable products.
- Ensure a high-quality code base by writing and reviewing performant, well-tested code.
- Mentor junior software engineers and teammates.
- Drive innovative improvements to team development processes.
- Partner with Product Managers and Customer Experience Designers to develop a deep understanding of users and use cases, and apply that knowledge to scoping and building new modules and features.
- Collaborate across teams with exceptional peers who are passionate about what they do.

Ideal Candidate Qualifications
- 10+ years of engineering experience in an agile production environment.
- Experience leading the design and implementation of complex features in full-stack applications.
- Proficiency with object-oriented languages, preferably Java/Spring.
- Proficiency with modern front-end frameworks, preferably React with Redux and TypeScript.
- High proficiency in Python or Scala, Spark, and Hadoop platforms and tools (Hive, Impala, Airflow, NiFi, Sqoop).
- Fluency in the use of Git and Jenkins.
- Solid experience with RESTful APIs and JSON/SOAP-based APIs.
- Solid experience with SQL, multi-threading, and message queuing.
- Experience building and deploying production-level data-driven applications and data processing workflows/pipelines, and/or implementing machine learning systems at scale in Java, Scala, or Python, delivering analytics across all phases.

Desirable Capabilities
- Hands-on experience with cloud-native development using microservices.
- Hands-on experience with Kafka and ZooKeeper.
- Knowledge of security concepts and protocols in enterprise applications.
- Expertise with automated E2E and unit testing frameworks.
- Knowledge of Splunk or other alerting and monitoring solutions.

Core Competencies
- Strong technologist eager to learn new technologies and frameworks.
- Experience coaching and mentoring junior teammates.
- Customer-centric development approach.
- Passion for analytical/quantitative problem solving.
- Ability to identify and implement improvements to team development processes.
- Strong collaboration skills with experience collaborating across many people, roles, and geographies.
- Motivation, creativity, self-direction, and desire to thrive on small project teams.
- Superior academic record with a degree in Computer Science or a related technical field.
- Strong written and verbal English communication skills.

#AI3

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization, and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:
- Abide by Mastercard's security policies and practices;
- Ensure the confidentiality and integrity of the information being accessed;
- Report any suspected information security violation or breach; and
- Complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.
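As a small illustration of the Spark/Hive side of the qualifications above, here is a hedged PySpark sketch; the database, table, and column names are invented, not Mastercard's schema.

```python
# Illustrative sketch: aggregate a Hive table into a reporting model with Spark.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder.appName("portfolio-metrics")
    .enableHiveSupport()  # lets spark.table() resolve Hive catalog tables
    .getOrCreate()
)

txns = spark.table("analytics.transactions")

portfolio_kpis = (
    txns.where(F.col("txn_date") >= "2024-01-01")
    .groupBy("issuer_id")
    .agg(
        F.countDistinct("account_id").alias("active_accounts"),
        F.sum("amount").alias("total_spend"),
    )
)

portfolio_kpis.write.mode("overwrite").saveAsTable("analytics.portfolio_kpis")
```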
Posted 1 week ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: AWS Architecture
Good-to-have skills: Python (Programming Language)
Minimum 5 years of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide innovative solutions that enhance data accessibility and usability. This is an AWS Data Architect role, leading the design and implementation of scalable, cloud-native data platforms. The ideal candidate will have deep expertise in AWS data services, along with hands-on proficiency in Python and PySpark for building robust data pipelines and processing frameworks.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Continuously evaluate and improve data processes to ensure efficiency and effectiveness.
- Design and implement enterprise-scale data lake and data warehouse solutions on AWS.
- Lead the development of ELT/ETL pipelines using AWS Glue, EMR, Lambda, and Step Functions, with Python and PySpark (see the sketch below).
- Work closely with data engineers, analysts, and business stakeholders to define data architecture strategy.
- Define and enforce data modeling, metadata, security, and governance best practices.
- Create reusable architectural patterns and frameworks to streamline future development.
- Provide architectural leadership for migrating legacy data systems to AWS.
- Optimize performance, cost, and scalability of data processing workflows.

Professional & Technical Skills:
- Must-have skills: Proficiency in AWS Architecture.
- Strong understanding of data modeling and database design principles.
- Experience with ETL tools and data integration techniques.
- Familiarity with data warehousing concepts and technologies.
- Knowledge of programming languages such as Python or Java for data processing.
- AWS services: S3, Glue, Athena, Redshift, EMR, Lambda, IAM, Step Functions, CloudFormation or Terraform.
- Languages: Python, PySpark, SQL.
- Big data: Apache Spark, Hive, Delta Lake.
- Orchestration & DevOps: Airflow, Jenkins, Git, CI/CD pipelines.
- Security & governance: AWS Lake Formation, Glue Catalog, encryption, RBAC.
- Visualization: Exposure to BI tools like QuickSight, Tableau, or Power BI is a plus.

Additional Information:
- The candidate should have a minimum of 5 years of experience in AWS Architecture.
- This position is based at our Pune office.
- 15 years of full-time education is required.
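For the Glue/PySpark pipeline work referenced above, here is a skeletal AWS Glue job sketch. The catalog database, table, and output path are placeholders under stated assumptions, not a real project's.

```python
# Hedged sketch of an AWS Glue (PySpark) job: catalog read -> dedupe -> S3 write.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (hypothetical database/table).
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="example_raw", table_name="orders"
)

# Convert to a Spark DataFrame for standard transformations.
df = dyf.toDF().dropDuplicates(["order_id"])

df.write.mode("overwrite").parquet("s3://example-bucket/curated/orders/")

job.commit()
```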
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are looking for a skilled Data Engineer with a solid background in building and maintaining scalable data pipelines and systems. You will work closely with data analysts, engineering teams, and business stakeholders to ensure seamless data flow across platforms.

Responsibilities
- Design, build, and optimize robust, scalable data pipelines (batch and streaming).
- Develop ETL/ELT processes using tools like Airflow, DBT, or custom scripts.
- Integrate data from various sources (e.g., APIs, S3, databases, SaaS tools).
- Collaborate with analytics and product teams to ensure high-quality datasets.
- Monitor pipeline performance and troubleshoot data quality or latency issues.
- Work with cloud data warehouses (e.g., Redshift, Snowflake, BigQuery).
- Implement data validation, error handling, and alerting for production jobs (see the sketch below).
- Maintain documentation for pipelines, schemas, and data sources.

Requirements
- 3+ years of experience in Data Engineering or similar roles.
- Strong in SQL, with experience in data modeling and transformation.
- Hands-on experience with Python or Scala for scripting/data workflows.
- Experience working with Airflow, AWS (S3, Redshift, Lambda), or equivalent cloud tools.
- Knowledge of version control (Git) and CI/CD workflows.
- Strong problem-solving and communication skills.

Good To Have
- Experience with DBT, Kafka, or real-time data processing.
- Familiarity with BI tools (e.g., Tableau, Looker, Power BI).
- Exposure to Docker, Kubernetes, or DevOps practices.

This job was posted by Harika K from Invictus.
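A minimal sketch of the data-validation-with-alerting duty above: simple pandas checks with a pluggable alert hook. The column names, thresholds, and webhook URL are invented for illustration.

```python
# Hedged sketch: validate a DataFrame and alert on failures.
import pandas as pd
import requests


def alert(message: str) -> None:
    # Placeholder alerting hook; swap in Slack, PagerDuty, etc. in real use.
    requests.post("https://example.com/hooks/data-alerts", json={"text": message})


def validate(df: pd.DataFrame) -> bool:
    problems = []
    if df.empty:
        problems.append("dataset is empty")
    if df["order_id"].duplicated().any():
        problems.append("duplicate order_id values found")
    null_rate = df["customer_id"].isna().mean()
    if null_rate > 0.01:  # tolerate up to 1% missing customer ids
        problems.append(f"customer_id null rate too high: {null_rate:.1%}")

    for p in problems:
        alert(f"[pipeline] validation failed: {p}")
    return not problems  # True when the batch is safe to load
```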
Posted 1 week ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are looking for a skilled AWS Data Engineer to design, develop, and maintain scalable data pipelines and cloud-based data infrastructure on Amazon Web Services (AWS). The ideal candidate will work closely with data scientists, analysts, and software engineers to ensure high availability and performance of data solutions across the organization.

Responsibilities
- Build/support applications using speech-to-text AWS services like Transcribe and Comprehend, along with Bedrock (see the sketch below).
- Experience working with BI tools like QuickSight.
- Design, build, and manage scalable data pipelines using AWS services (e.g., Glue, Lambda, Step Functions, S3, EMR, Kinesis, Snowflake).
- Optimize data storage and retrieval for large-scale datasets in data lakes or data warehouses.
- Monitor, debug, and optimize the performance of data jobs and workflows.
- Ensure data quality, consistency, and security across environments.
- Collaborate with analytics, engineering, and business teams to understand data needs.
- Automate infrastructure deployment using IaC tools like CloudFormation or Terraform.
- Apply best practices for cloud cost optimization, data governance, and DevOps.
- Stay current with AWS services and recommend improvements to data architecture.
- Understanding of machine learning pipelines and MLOps (nice to have).

Requirements
- Bachelor's degree in computer science or a related field.
- 5+ years of experience as a Data Engineer, with at least 3 years focused on AWS.
- Strong experience with AWS services, including Transcribe, Bedrock, and QuickSight.
- Familiarity with Glue, S3, Snowflake, Lambda, Step Functions, Kinesis, Athena, EC2/EMR, Power BI, or Tableau.
- Proficiency in Python, PySpark, or Scala for data engineering tasks.
- Hands-on experience with SQL and data modeling.
- Familiarity with CI/CD pipelines and version control (e.g., Git, CodePipeline).
- Experience with orchestration tools (e.g., Airflow, Step Functions).
- Knowledge of data security, privacy, and compliance standards (GDPR, HIPAA, etc.).

Good-To-Have Skills
- AWS certifications (e.g., AWS Certified Data Analytics - Specialty, AWS Certified Solutions Architect).
- Experience with containerization (Docker, ECS, EKS).
- Experience working in Agile/Scrum environments.

This job was posted by Shailendra Singh from PearlShell Softech.
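For the speech-to-text responsibility above, here is an illustrative boto3 call to Amazon Transcribe; the job name, bucket, and audio file are placeholders, and production code would poll with backoff rather than once.

```python
# Hedged sketch: kick off a Transcribe job for a call recording stored in S3.
import boto3

transcribe = boto3.client("transcribe", region_name="us-east-1")

transcribe.start_transcription_job(
    TranscriptionJobName="example-call-001",
    Media={"MediaFileUri": "s3://example-bucket/calls/call-001.wav"},
    MediaFormat="wav",
    LanguageCode="en-US",
    OutputBucketName="example-bucket",  # transcript JSON lands here
)

# Simplified status check (real code would poll with backoff and a timeout).
status = transcribe.get_transcription_job(TranscriptionJobName="example-call-001")
print(status["TranscriptionJob"]["TranscriptionJobStatus"])
```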
Posted 1 week ago
5.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
As a Senior Data Engineer, you will architect, build, and maintain our data infrastructure that powers critical business decisions. You will work closely with data scientists, analysts, and product teams to design and implement scalable solutions for data processing, storage, and retrieval. Your work will directly impact our ability to leverage data for business intelligence, machine learning initiatives, and customer insights.

Responsibilities
- Design, build, and maintain our end-to-end data infrastructure on the AWS and GCP cloud platforms.
- Develop and optimize ETL/ELT pipelines to process large volumes of data from multiple sources.
- Build and support data pipelines for reporting, analytics, and machine learning applications.
- Implement and manage streaming data solutions using Kafka and other technologies (see the sketch below).
- Design and optimize database schemas and data models in ClickHouse and other databases.
- Develop and maintain data workflows using Apache Airflow and similar orchestration tools.
- Write efficient, maintainable, and scalable code using PySpark and other data processing frameworks.
- Collaborate with data scientists to implement ML infrastructure for model training and deployment.
- Ensure data quality, reliability, and security across all data platforms.
- Monitor data pipelines and implement proactive alerting systems.
- Troubleshoot and resolve data infrastructure issues.
- Document data flows, architectures, and processes.
- Stay current with industry trends and emerging technologies in data engineering.

Requirements
- Bachelor's degree in Computer Science, Engineering, or a related technical field (Master's preferred).
- 5+ years of experience in data engineering roles.
- Strong expertise in the AWS and/or GCP cloud platforms and services.
- Proficiency in building data pipelines using modern ETL/ELT tools and frameworks.
- Experience with stream processing technologies such as Kafka.
- Hands-on experience with ClickHouse or similar analytical databases.
- Strong programming skills in Python and experience with PySpark.
- Experience with workflow orchestration tools like Apache Airflow.
- Solid understanding of data modeling, data warehousing concepts, and dimensional modeling.
- Knowledge of SQL and NoSQL databases.
- Strong problem-solving skills and attention to detail.
- Excellent communication skills and ability to work in cross-functional teams.
- Experience in D2C, e-commerce, or retail industries.
- Knowledge of data visualization tools (Tableau, Looker, Power BI).
- Experience with real-time analytics solutions.
- Familiarity with CI/CD practices for data pipelines.
- Experience with containerization technologies (Docker, Kubernetes).
- Understanding of data governance and compliance requirements.
- Experience with MLOps or ML engineering.

Technologies
- Cloud Platforms: AWS (S3, Redshift, EMR, Lambda), GCP (BigQuery, Dataflow, Dataproc).
- Data Processing: Apache Spark, PySpark, Python, SQL.
- Streaming: Apache Kafka, Kinesis.
- Data Storage: ClickHouse, S3, BigQuery, PostgreSQL, MongoDB.
- Orchestration: Apache Airflow.
- Version Control: Git.
- Containerization: Docker, Kubernetes (optional).

This job was posted by Sidharth Patra from Traya Health.
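Here is a hedged sketch of the Kafka-to-storage streaming ingestion described above, using PySpark Structured Streaming; the brokers, topic, and sink paths are hypothetical, and the job assumes the spark-sql-kafka connector package is on the classpath.

```python
# Illustrative streaming read from Kafka, landing raw payloads as parquet.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("order-stream").getOrCreate()

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")
    .option("subscribe", "orders")
    .load()
)

# Kafka delivers bytes; cast the value to string before parsing downstream.
orders = stream.select(F.col("value").cast("string").alias("payload"))

query = (
    orders.writeStream.format("parquet")
    .option("path", "s3://example-bucket/stream/orders/")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/orders/")
    .start()
)
query.awaitTermination()
```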
Posted 1 week ago
2.0 - 4.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
We are looking for a highly skilled and hands-on Senior Data Engineer to join our growing data engineering practice in Mumbai. This role requires deep technical expertise in building and managing enterprise-grade data pipelines, with a primary focus on Amazon Redshift, AWS Glue, and data orchestration using Airflow or Step Functions. You will be responsible for building scalable, high-performance data workflows that ingest and process multi-terabyte-scale data across complex, concurrent environments. The ideal candidate is someone who thrives in solving performance bottlenecks, has led or participated in data warehouse migrations (e.g., Snowflake to Redshift), and is confident in interfacing with business stakeholders to translate requirements into robust data solutions.

Responsibilities
- Design, develop, and maintain high-throughput ETL/ELT pipelines using AWS Glue (PySpark), orchestrated via Apache Airflow or AWS Step Functions.
- Own and optimize large-scale Amazon Redshift clusters and manage high-concurrency workloads for a very large user base (see the bulk-load sketch below).
- Lead and contribute to migration projects from Snowflake or traditional RDBMSs to Redshift, ensuring minimal downtime and robust validation.
- Integrate and normalize data from heterogeneous sources, including REST APIs, AWS Aurora (MySQL/Postgres), streaming inputs, and flat files.
- Implement intelligent caching strategies; leverage EC2 and serverless compute (Lambda, Glue) for custom transformations and processing at scale.
- Write advanced SQL for analytics, data reconciliation, and validation, demonstrating strong SQL development and tuning experience.
- Implement comprehensive monitoring, alerting, and logging for all data pipelines to ensure reliability, availability, and cost optimization.
- Collaborate directly with product managers, analysts, and client-facing teams to gather requirements and deliver insights-ready datasets.
- Champion data governance, security, and lineage, ensuring data is auditable and well-documented across all environments.

Requirements
- 2-4 years of core data engineering experience, especially focused on Amazon Redshift, with hands-on performance tuning and large-scale cluster management.
- Demonstrated experience handling multi-terabyte Redshift clusters, concurrent query loads, and complex workload segmentation and queue priorities.
- Strong experience with AWS Glue (PySpark) for large-scale ETL jobs.
- Solid understanding and implementation experience of workflow orchestration using Apache Airflow or AWS Step Functions.
- Strong proficiency in Python, advanced SQL, and data modeling concepts.
- Familiarity with CI/CD pipelines, Git, DevOps processes, and infrastructure-as-code concepts.
- Experience with Amazon Athena, Lake Formation, or S3-based data lakes.
- Hands-on participation in Snowflake, BigQuery, or Teradata migration projects.
- AWS certifications such as AWS Certified Data Analytics - Specialty or AWS Certified Solutions Architect - Associate/Professional.
- Exposure to real-time streaming architectures or Lambda architectures.

Soft Skills & Expectations
- Excellent communication skills; able to confidently engage with both technical and non-technical stakeholders, including clients.
- Strong problem-solving mindset and a keen attention to performance, scalability, and reliability.
- Demonstrated ability to work independently, lead tasks, and take ownership of large-scale systems.
- Comfortable working in a fast-paced, dynamic, and client-facing environment.

This job was posted by Rituza Rani from Oneture Technologies.
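The bulk-load sketch referenced above shows the standard S3-to-Redshift COPY pattern, which is far faster than row-by-row inserts for multi-terabyte loads. The cluster endpoint, credentials, IAM role, and table are placeholders under stated assumptions.

```python
# Hedged sketch: bulk-load staged parquet files from S3 into Redshift via COPY.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="***",  # real code should pull this from a secrets manager
)

copy_sql = """
    COPY analytics.orders
    FROM 's3://example-bucket/staging/orders/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy'
    FORMAT AS PARQUET;
"""

# The connection context manager commits the transaction on success.
with conn, conn.cursor() as cur:
    cur.execute(copy_sql)
```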
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role - Python Developer
Experience - 5 to 8 years
Location - Bengaluru / Chennai
Mode - 100% WFO
Notice Period - Immediate joiners to candidates serving notice until 15th Aug (supporting documents for the last working date are required); bench candidates will not be considered.
Candidates should be available for a virtual interview.
Skills - Python development with API integration and Airflow
Contact - Grace, Call / WhatsApp - 6385810755, Email - grace.h@cortexconsultants.com
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Summary:
We are looking for a skilled and experienced Data Engineer with over 5 years of experience in data engineering and data migration projects. The ideal candidate should possess strong expertise in SQL, Python, data modeling, data warehousing, and ETL pipeline development. Experience with big data tools like Hadoop and Spark, along with AWS services such as Redshift, S3, Glue, EMR, and Lambda, is essential. This role provides an excellent opportunity to work on large-scale data solutions, enabling data-driven decision-making and operational excellence.

Key Responsibilities:
• Design, build, and maintain scalable data pipelines and ETL processes.
• Develop and optimize data models and data warehouse architectures.
• Implement and manage big data technologies and cloud-based data solutions.
• Perform data migration, data transformation, and integration from multiple sources.
• Collaborate with data scientists, analysts, and business teams to understand data needs and deliver solutions.
• Ensure data quality, consistency, and security across all data pipelines and storage systems.
• Optimize performance and manage cost-efficient AWS cloud resources.

Basic Qualifications:
• Master's degree in Computer Science, Engineering, Analytics, Mathematics, Statistics, IT, or equivalent.
• 5+ years of experience in Data Engineering and data migration projects.
• Proficient in SQL and Python for data processing and analysis.
• Strong experience in data modeling, data warehousing, and building data pipelines.
• Hands-on experience with big data technologies like Hadoop, Spark, etc.
• Expertise in AWS services including Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM.
• Understanding of ETL development best practices and principles.

Preferred Qualifications:
• Knowledge of data security and data privacy best practices.
• Experience with DevOps and CI/CD practices related to data workflows.
• Familiarity with data lake architectures and real-time data streaming.
• Strong problem-solving abilities and attention to detail.
• Excellent verbal and written communication skills.
• Ability to work independently and in a team-oriented environment.

Good to Have:
• Experience with orchestration tools like Airflow or Step Functions.
• Exposure to BI/Visualization tools like QuickSight, Tableau, or Power BI.
• Understanding of data governance and compliance standards.
Posted 1 week ago
6.0 - 11.0 years
6 - 10 Lacs
Hyderabad
Work from Office
About the Role
In this opportunity as Senior Data Engineer, you will:
- Develop and maintain data solutions using resources such as dbt, Alteryx, and Python.
- Design and optimize data pipelines, ensuring efficient data flow and processing.
- Work extensively with databases, SQL, and various data formats including JSON, XML, and CSV.
- Tune and optimize queries to enhance performance and reliability.
- Develop high-quality code in SQL, dbt, and Python, adhering to best practices.
- Understand and implement data automation and API integrations.
- Leverage AI capabilities to enhance data engineering practices.
- Understand integration points related to upstream and downstream requirements.
- Proactively manage tasks and work toward completion against tight deadlines.
- Analyze existing processes and offer suggestions for improvement.

About You
You're a fit for the role of Senior Data Engineer if your background includes:
- Strong interest and knowledge in data engineering principles and methods.
- 6+ years of experience developing data solutions or pipelines.
- 6+ years of hands-on experience with databases and SQL.
- 2+ years of experience programming in an additional language.
- 2+ years of experience in query tuning and optimization.
- Experience working with SQL, JSON, XML, and CSV content.
- Understanding of data automation and API integration.
- Familiarity with AI capabilities and their application in data engineering.
- Ability to adhere to best practices for developing programmatic solutions.
- Strong problem-solving skills and ability to work independently.

#LI-SS6

What's in it For You
- Hybrid Work Model: We've adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected.
- Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.
- Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow's challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.
- Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.
- Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.
- Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.
- Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency.
Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world.

Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world leading provider of trusted journalism and news.

We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward.

As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here. Learn more on how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.
Posted 1 week ago
4.0 - 6.0 years
5 - 9 Lacs
Hyderabad
Work from Office
Are you excited by the prospect of wrangling data, helping develop information systems/sources/tools, and shaping the way businesses make decisions? The Go-To-Markets Data Analytics team is looking for a skilled Data Engineer who is motivated to deliver top-notch data-engineering solutions to support business intelligence, data science, and self-service data solutions.
About the Role:
In this role as a Data Engineer, you will: Design, develop, optimize, and automate data pipelines that blend and transform data across different sources to help drive business intelligence, data science, and self-service data solutions. Work closely with data scientists and data visualization teams to understand data requirements and ensure the availability of high-quality data for analytics, modelling, and reporting. Build pipelines that source, transform, and load data that's both structured and unstructured, keeping in mind data security and access controls. Explore large volumes of data with curiosity and conviction. Contribute to the strategy and architecture of data management systems and solutions. Proactively troubleshoot and resolve data-related and performance bottlenecks in a timely manner. Be open to learning and working on emerging technologies in the data engineering, data science, and cloud computing space. Have the curiosity to interrogate data, conduct independent research, utilize various techniques, and tackle ambiguous problems.
Shift Timings: 12 PM to 9 PM (IST). Work from office for 2 days a week (Mandatory).
About You
You're a fit for the role of Data Engineer if your background includes: Must have at least 4-6 years of total work experience, with at least 2+ years in data engineering or analytics domains. Graduates in data analytics, data science, computer science, software engineering, or other data-centric disciplines. SQL proficiency a must. Experience with data pipeline and transformation tools such as dbt, Glue, FiveTran, Alteryx, or similar solutions. Experience using cloud-based data warehouse solutions such as Snowflake, Redshift, or Azure. Experience with orchestration tools like Airflow or Dagster. Preferred experience using Amazon Web Services (S3, Glue, Athena, QuickSight). Data modelling knowledge of various schemas like snowflake and star. Has built data pipelines and other custom automated solutions to speed the ingestion, analysis, and visualization of large volumes of data. Knowledge of building ETL workflows, database design, and query optimization. Has experience with a scripting language like Python. Works well within a team and collaborates with colleagues across domains and geographies. Excellent oral, written, and visual communication skills. Has a demonstrable ability to assimilate new information thoroughly and quickly. Strong logical and scientific approach to problem-solving. Can articulate complex results in a simple and concise manner to all levels within the organization.
#LI-GS2
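The posting calls out orchestration tools like Airflow. For a concrete sense of what that entails, here is a minimal sketch of an Airflow 2.x DAG wiring an extract-transform-load sequence; the DAG id, task names, and print-statement bodies are placeholders, not anything from the team's actual pipelines:

```python
# A minimal Airflow DAG sketch; hypothetical names throughout.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**_):
    print("pull raw files from the source systems")

def transform(**_):
    print("clean and conform records")

def load(**_):
    print("load conformed data into the warehouse")

with DAG(
    dag_id="gtm_daily_pipeline",      # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_transform >> t_load  # declare the dependency chain
```

The `>>` chaining is what makes orchestration more than scheduling: Airflow tracks each task's state, retries failures independently, and backfills runs, which is why postings like this list it alongside warehouse and ETL skills.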
Posted 1 week ago
6.0 - 7.0 years
6 - 10 Lacs
Hyderabad
Work from Office
Are you excited by the prospect of wrangling data, helping develop information systems/sources/tools, and shaping the way businesses make decisions? The Go-To-Markets Data Analytics team is looking for a skilled Senior Data Engineer who is motivated to deliver top-notch data-engineering solutions to support business intelligence, data science, and self-service data solutions.
About the Role:
In this role as a Senior Data Engineer, you will: Design, develop, optimize, and automate data pipelines that blend and transform data across different sources to help drive business intelligence, data science, and self-service data solutions. Work closely with data scientists and data visualization teams to understand data requirements and ensure the availability of high-quality data for analytics, modelling, and reporting. Build pipelines that source, transform, and load data that's both structured and unstructured, keeping in mind data security and access controls. Explore large volumes of data with curiosity and conviction. Contribute to the strategy and architecture of data management systems and solutions. Proactively troubleshoot and resolve data-related and performance bottlenecks in a timely manner. Be open to learning and working on emerging technologies in the data engineering, data science, and cloud computing space. Have the curiosity to interrogate data, conduct independent research, utilize various techniques, and tackle ambiguous problems.
Shift Timings: 12 PM to 9 PM (IST). Work from office for 2 days a week (Mandatory).
About You
You're a fit for the role of Senior Data Engineer if your background includes: Must have at least 6-7 years of total work experience, with at least 3+ years in data engineering or analytics domains. Graduates in data analytics, data science, computer science, software engineering, or other data-centric disciplines. SQL proficiency a must. Experience with data pipeline and transformation tools such as dbt, Glue, FiveTran, Alteryx, or similar solutions. Experience using cloud-based data warehouse solutions such as Snowflake, Redshift, or Azure. Experience with orchestration tools like Airflow or Dagster. Preferred experience using Amazon Web Services (S3, Glue, Athena, QuickSight). Data modelling knowledge of various schemas like snowflake and star. Has built data pipelines and other custom automated solutions to speed the ingestion, analysis, and visualization of large volumes of data. Knowledge of building ETL workflows, database design, and query optimization. Has experience with a scripting language like Python. Works well within a team and collaborates with colleagues across domains and geographies. Excellent oral, written, and visual communication skills. Has a demonstrable ability to assimilate new information thoroughly and quickly. Strong logical and scientific approach to problem-solving. Can articulate complex results in a simple and concise manner to all levels within the organization.
#LI-GS2
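Both Go-To-Markets roles ask for data-modelling knowledge of star and snowflake schemas. As a toy illustration (hypothetical table and column names, run against in-memory SQLite purely so it executes anywhere), a star schema is a central fact table keyed to surrounding dimension tables:

```python
# Star-schema sketch: one fact table joined to two dimensions.
import sqlite3

DDL = """
CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, full_date TEXT, month TEXT);
CREATE TABLE dim_customer (customer_key INTEGER PRIMARY KEY, name TEXT, region TEXT);
CREATE TABLE fact_sales (
    date_key INTEGER REFERENCES dim_date(date_key),
    customer_key INTEGER REFERENCES dim_customer(customer_key),
    amount REAL
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL)
conn.execute("INSERT INTO dim_date VALUES (20240101, '2024-01-01', '2024-01')")
conn.execute("INSERT INTO dim_customer VALUES (1, 'Acme', 'APAC')")
conn.execute("INSERT INTO fact_sales VALUES (20240101, 1, 99.5)")

# Analytics queries fan out from the fact table through the dimensions.
row = conn.execute(
    """SELECT d.month, c.region, SUM(f.amount)
       FROM fact_sales f
       JOIN dim_date d USING (date_key)
       JOIN dim_customer c USING (customer_key)
       GROUP BY d.month, c.region"""
).fetchone()
print(row)  # ('2024-01', 'APAC', 99.5)
```

A snowflake schema simply normalizes the dimensions further (e.g., region split into its own table), trading join depth for less redundancy.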
Posted 1 week ago
5.0 - 10.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Senior Machine Learning Engineer - Recommender Systems
Join our team at Thomson Reuters and contribute to the global knowledge economy. Our innovative technology influences global markets and supports professionals worldwide in making pivotal decisions. Collaborate with some of the brightest minds on diverse projects to craft next-generation solutions that have a significant impact. As a leader in providing intelligent information, we value the unique perspectives that foster the advancement of our business and your professional journey. Are you excited about the opportunity to leverage your extensive technical expertise to guide a development team through the complexities of full life-cycle implementation at a top-tier company? Our Commercial Engineering team is eager to welcome a skilled Senior Machine Learning Engineer to our established global engineering group. We're looking for someone enthusiastic, an independent thinker who excels in a collaborative environment across various disciplines and is at ease interacting with a diverse range of individuals and technology stacks. This is your chance to make a lasting impact by transforming customer interactions as we develop the next generation of an enterprise-wide experience.
About the Role:
As a Senior Machine Learning Engineer, you will: Spearhead the development and technical implementation of machine learning solutions, including configuration and integration, to fulfill business, product, and recommender-system objectives. Create machine learning solutions that are scalable, dependable, and secure. Craft and sustain technical outputs such as design documentation and representative models. Contribute to the establishment of machine learning best practices, technical standards, model designs, and quality control, including code reviews. Provide expert oversight, guidance on implementation, and solutions for technical challenges. Collaborate with an array of stakeholders, cross-functional and product teams, business units, technical specialists, and architects to grasp the project scope, requirements, solutions, data, and services. Promote a team-focused culture that values information sharing and diverse viewpoints. Cultivate an environment of continual enhancement, learning, innovation, and deployment.
About You:
You are an excellent candidate for the role of Senior Machine Learning Engineer if you possess: At least 5 years of experience in addressing practical machine learning challenges, particularly with recommender systems, to enhance user efficiency, reliability, and consistency. A profound comprehension of data processing, machine learning infrastructure, and DevOps/MLOps practices. A minimum of 2 years of experience with cloud technologies (AWS is preferred), including services, networking, and security principles. Direct experience in machine learning and orchestration, developing intricate multi-tenant machine learning products. Proficient Python programming skills, SQL, and data modeling expertise, with dbt considered a plus. Familiarity with Spark, Airflow, PyTorch, scikit-learn, Pandas, Keras, and other relevant ML libraries. Experience in leading and supporting engineering teams. Robust background in crafting data science and machine learning solutions. A creative, resourceful, and effective problem-solving approach.
#LI-FZ1
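For a concrete flavor of the recommender-systems work described above, here is a toy item-based collaborative filter: score unseen items by their cosine similarity to items the user already rated. The ratings matrix is invented, and production recommenders look very different; this only illustrates the core idea:

```python
# Toy item-based recommender on a made-up user-item ratings matrix.
import numpy as np

# Rows = users, columns = items; 0 means "not rated".
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
])

# Cosine similarity between item columns (epsilon avoids divide-by-zero).
norms = np.linalg.norm(R, axis=0, keepdims=True)
sim = (R.T @ R) / (norms.T @ norms + 1e-9)

def recommend(user: int, k: int = 2) -> list[int]:
    scores = R[user] @ sim            # weight items by similarity to rated ones
    scores[R[user] > 0] = -np.inf     # exclude items the user already rated
    return list(np.argsort(scores)[::-1][:k])

print(recommend(0))  # highest-scoring unseen items for user 0
```

The engineering challenges the posting names (scalability, multi-tenancy, MLOps) come from running this kind of scoring over millions of users and items with fresh data, not from the math itself.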
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Data Engineer
Location: Hyderabad
Experience: 5+ Years
Job Summary: We are looking for a skilled and experienced Data Engineer with over 5 years of experience in data engineering and data migration projects. The ideal candidate should possess strong expertise in SQL, Python, data modeling, data warehousing, and ETL pipeline development. Experience with big data tools like Hadoop and Spark, along with AWS services such as Redshift, S3, Glue, EMR, and Lambda, is essential. This role provides an excellent opportunity to work on large-scale data solutions, enabling data-driven decision-making and operational excellence.
Key Responsibilities:
• Design, build, and maintain scalable data pipelines and ETL processes.
• Develop and optimize data models and data warehouse architectures.
• Implement and manage big data technologies and cloud-based data solutions.
• Perform data migration, data transformation, and integration from multiple sources.
• Collaborate with data scientists, analysts, and business teams to understand data needs and deliver solutions.
• Ensure data quality, consistency, and security across all data pipelines and storage systems.
• Optimize performance and manage cost-efficient AWS cloud resources.
Basic Qualifications:
• Master's degree in Computer Science, Engineering, Analytics, Mathematics, Statistics, IT, or equivalent.
• 5+ years of experience in data engineering and data migration projects.
• Proficient in SQL and Python for data processing and analysis.
• Strong experience in data modeling, data warehousing, and building data pipelines.
• Hands-on experience with big data technologies like Hadoop and Spark.
• Expertise in AWS services including Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM.
• Understanding of ETL development best practices and principles.
Preferred Qualifications:
• Knowledge of data security and data privacy best practices.
• Experience with DevOps and CI/CD practices related to data workflows.
• Familiarity with data lake architectures and real-time data streaming.
• Strong problem-solving abilities and attention to detail.
• Excellent verbal and written communication skills.
• Ability to work independently and in a team-oriented environment.
Good to Have:
• Experience with orchestration tools like Airflow or Step Functions.
• Exposure to BI/visualization tools like QuickSight, Tableau, or Power BI.
• Understanding of data governance and compliance standards.
Why Join Us? People Tech Group has grown significantly over the past two decades, focusing on enterprise applications and IT services. We are headquartered in Bellevue, Washington, with a presence across the USA, Canada, and India, and we are also expanding to the EU, ME, and APAC regions. With a strong pipeline of projects and satisfied customers, People Tech has been recognized as a Gold Certified Partner for Microsoft and Oracle.
Benefits: L1 Visa opportunities to the USA after 1 year of a proven track record. Competitive wages with private healthcare cover. Incentives for certifications and educational assistance for relevant courses. Support for family with maternity leave. Complimentary daily lunch and participation in employee resource groups. For more details, please visit People Tech Group.
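This posting combines S3, Glue, and Redshift; a rough sketch of how those pieces commonly chain together follows. The bucket, Glue job, table, and IAM role names are all hypothetical, and the snippet assumes AWS credentials are already configured for boto3:

```python
# Hypothetical S3 -> Glue -> Redshift flow; names are placeholders.
import boto3

s3 = boto3.client("s3")
glue = boto3.client("glue")

# 1) Stage a raw extract in the landing zone.
s3.upload_file("daily_extract.csv", "example-raw-bucket",
               "landing/daily_extract.csv")

# 2) Kick off a Glue job (assumed to already exist) that cleans and
#    converts the landing data to curated Parquet.
run = glue.start_job_run(
    JobName="clean_daily_extract",
    Arguments={"--input_path": "s3://example-raw-bucket/landing/"},
)
print("started Glue run:", run["JobRunId"])

# 3) A Redshift COPY (issued via your SQL client) would then load the
#    curated output, e.g.:
#    COPY analytics.daily_extract
#    FROM 's3://example-raw-bucket/curated/'
#    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'
#    FORMAT AS PARQUET;
```

Orchestration tools mentioned under "Good to Have" (Airflow, Step Functions) typically sit above this flow, sequencing the upload, job run, and COPY and handling retries.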
Posted 1 week ago
6.0 - 10.0 years
1 - 1 Lacs
Chennai
Hybrid
Overview: TekWissen is a global workforce management provider with operations throughout India and many other countries in the world. The client below is a global company with shared ideals and a deep sense of family. From our earliest days as a pioneer of modern transportation, we have sought to make the world a better place, one that benefits lives, communities, and the planet.
Job Title: Specialty Development Practitioner
Location: Chennai
Work Type: Hybrid
Position Description: At the client's Credit Company, we are modernizing our enterprise data warehouse in Google Cloud to enhance data, analytics, and AI/ML capabilities, improve customer experience, ensure regulatory compliance, and boost operational efficiencies. As a GCP Data Engineer, you will integrate data from various sources into novel data products. You will build upon existing analytical data, including merging historical data from legacy platforms with data ingested from new platforms. You will also analyze and manipulate large datasets, activating data assets to enable enterprise platforms and analytics within GCP. You will design and implement the transformation and modernization on GCP, creating scalable data pipelines that land data from source applications, integrate it into subject areas, and build data marts and products for analytics solutions. You will also conduct deep-dive analysis of current-state Receivables and Originations data in our data warehouse, performing impact analysis related to the client's Credit North America modernization and providing implementation solutions. Moreover, you will partner closely with our AI, data science, and product teams, developing creative solutions that build the future for the client's Credit. Experience with large-scale solutions and operationalizing data warehouses, data lakes, and analytics platforms on Google Cloud Platform or other cloud environments is a must. We are looking for candidates with a broad set of analytical and technology skills across these areas who can demonstrate an ability to design the right solutions with the appropriate combination of GCP and third-party technologies for deployment on Google Cloud Platform.
Skills Required: BigQuery, Dataflow, Dataform, Data Fusion, Dataproc, Cloud Composer, Airflow, Cloud SQL, Compute Engine, Google Cloud Platform
Experience Required: GCP Data Engineer Certified. Successfully designed and implemented data warehouses and ETL processes for over five years, delivering high-quality data solutions. 5+ years of complex SQL development experience. 2+ years of experience with programming languages such as Python, Java, or Apache Beam. Experienced cloud engineer with 3+ years of GCP expertise, specializing in managing cloud infrastructure and applications in production-scale solutions.
Additional Skills: Terraform, Tekton, Postgres, PySpark, Python, APIs, Cloud Build, App Engine, Apache Kafka, Pub/Sub, AI/ML, Kubernetes
Experience Preferred: In-depth understanding of GCP's underlying architecture and hands-on experience with crucial GCP services, especially those related to data processing (batch and real-time), leveraging Terraform, BigQuery, Dataflow, Pub/Sub, Dataform, Astronomer, Data Fusion, Dataproc, PySpark, Cloud Composer/Airflow, Cloud SQL, Compute Engine, Cloud Functions, Cloud Run, Cloud Build, and App Engine, alongside storage including Cloud Storage. DevOps tools such as Tekton, GitHub, Terraform, and Docker. Expert in designing, optimizing, and troubleshooting complex data pipelines. Experience developing with a microservice architecture on a container orchestration framework. Experience in designing pipelines and architectures for data processing. Passion and self-motivation to develop, experiment with, and implement state-of-the-art data engineering methods and techniques. Self-directed; works independently with minimal supervision and adapts to ambiguous environments. Evidence of a proactive problem-solving mindset and willingness to take the initiative. Strong prioritization, collaboration, and coordination skills, and the ability to simplify and communicate complex ideas with cross-functional teams and all levels of management. Proven ability to juggle multiple responsibilities and competing demands while maintaining a high level of productivity. Data engineering or development experience gained in a regulated financial environment. Experience in coaching and mentoring data engineers. Project management tools like Atlassian JIRA. Experience working on an implementation team from concept to operations, providing deep technical subject matter expertise for successful deployment. Experience with data security, governance, and compliance best practices in the cloud. Experience with AI solutions or platforms that support AI solutions. Experience using data science concepts on production datasets to generate insights.
Experience Range: 5+ years
Education Required: Bachelor's Degree
TekWissen® Group is an equal opportunity employer supporting workforce diversity.
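Since BigQuery sits at the center of this role, a small example of the parameterized query work it implies may help. The project, dataset, and table names are made up, and the snippet assumes the google-cloud-bigquery package and application-default credentials:

```python
# Parameterized BigQuery query sketch; project/dataset/table are invented.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

query = """
    SELECT account_id, SUM(amount) AS total_receivable
    FROM `example-project.finance.receivables`
    WHERE snapshot_date = @snapshot
    GROUP BY account_id
"""
job = client.query(
    query,
    job_config=bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("snapshot", "DATE", "2024-01-31")
        ]
    ),
)
for row in job.result():
    print(row.account_id, row.total_receivable)
```

In the pipeline architecture the posting describes, queries like this would typically be wrapped in Cloud Composer/Airflow tasks rather than run ad hoc, with Dataflow handling the heavier batch and streaming transforms upstream.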
Posted 1 week ago
8.0 years
0 Lacs
New Delhi, Delhi, India
On-site
Location: Delhi
Experience: 5–8 Years
Industry: Financial Services / Payments
Job Summary: We are looking for a skilled Data Modeler / Architect with 5–8 years of experience in designing, implementing, and optimizing robust data architectures in the financial payments industry. The ideal candidate will have deep expertise in SQL, data modeling, ETL/ELT pipeline development, and cloud-based data platforms such as Databricks or Snowflake. You will play a key role in designing scalable data models, orchestrating reliable data workflows, and ensuring the integrity and performance of mission-critical financial datasets. This is a highly collaborative role interfacing with engineering, analytics, product, and compliance teams.
Key Responsibilities: Design, implement, and maintain logical and physical data models to support transactional, analytical, and reporting systems. Develop and manage scalable ETL/ELT pipelines for processing large volumes of financial transaction data. Tune and optimize SQL queries, stored procedures, and data transformations for maximum performance. Build and manage data orchestration workflows using tools like Airflow, Dagster, or Luigi. Architect data lakes and warehouses using platforms like Databricks, Snowflake, BigQuery, or Redshift. Enforce and uphold data governance, security, and compliance standards (e.g., PCI-DSS, GDPR). Collaborate closely with data engineers, analysts, and business stakeholders to understand data needs and deliver solutions. Conduct data profiling, validation, and quality assurance to ensure clean and consistent data. Maintain clear and comprehensive documentation for data models, pipelines, and architecture.
Required Skills & Qualifications: 5–8 years of experience as a Data Modeler, Data Architect, or Senior Data Engineer in the financial/payments domain. Advanced SQL expertise, including query tuning, indexing, and performance optimization. Proficiency in developing ETL/ELT workflows using tools such as Spark, dbt, Talend, or Informatica. Experience with data orchestration frameworks: Airflow, Dagster, Luigi, etc. Strong hands-on experience with cloud-based data platforms like Databricks, Snowflake, or equivalents. Deep understanding of data warehousing principles: star/snowflake schemas, slowly changing dimensions, etc. Familiarity with financial data structures, such as payment transactions, reconciliation, fraud patterns, and audit trails. Working knowledge of cloud services (AWS, GCP, or Azure) and data security best practices. Strong analytical thinking and problem-solving capabilities in high-scale environments.
Preferred Qualifications: Experience with real-time data pipelines (e.g., Kafka, Spark Streaming). Exposure to data mesh or data fabric architecture paradigms. Certifications in Snowflake, Databricks, or relevant cloud platforms. Knowledge of Python or Scala for data engineering tasks.
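One warehousing concept this posting names explicitly is slowly changing dimensions. A type-2 SCD preserves history by closing out the superseded row and appending a new current one. The pandas sketch below uses invented merchant data purely to show the mechanics; a real implementation would usually be a MERGE statement in the warehouse:

```python
# Toy SCD type-2 update: close old row, append new current row.
from datetime import date

import pandas as pd

dim = pd.DataFrame([
    {"merchant_id": "M1", "tier": "gold", "valid_from": date(2023, 1, 1),
     "valid_to": None, "is_current": True},
])
incoming = pd.DataFrame([{"merchant_id": "M1", "tier": "platinum"}])

today = date(2024, 6, 1)
merged = incoming.merge(dim[dim.is_current], on="merchant_id",
                        suffixes=("", "_old"))
changed = merged[merged.tier != merged.tier_old]

# Close out superseded rows, then append the new current versions.
dim.loc[dim.merchant_id.isin(changed.merchant_id) & dim.is_current,
        ["valid_to", "is_current"]] = [today, False]
new_rows = changed[["merchant_id", "tier"]].assign(
    valid_from=today, valid_to=None, is_current=True)
dim = pd.concat([dim, new_rows], ignore_index=True)
print(dim)  # one closed 'gold' row, one current 'platinum' row
```

The valid_from/valid_to window plus an is_current flag is what lets reconciliation and audit-trail queries, central to payments work, reconstruct the dimension as of any past date.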
Posted 1 week ago