
5801 Airflow Jobs - Page 14

JobPe aggregates job listings for easy access; you apply directly on the original job portal.

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Must have: strong PostgreSQL database knowledge, including writing procedures and functions, writing dynamic code, performance tuning, complex queries, and UNIX. Good to have: IDMC or another ETL tool, Airflow DAGs, Python, MS calls.
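For illustration, a minimal sketch of an Airflow DAG that invokes a PostgreSQL stored procedure of the kind this role would write; it assumes Airflow 2.4+ with the apache-airflow-providers-postgres package, and the connection ID and procedure name are placeholders rather than details from the posting.

```python
# Sketch: nightly Airflow DAG that calls a PostgreSQL stored procedure.
from datetime import datetime

from airflow import DAG
from airflow.providers.postgres.operators.postgres import PostgresOperator

with DAG(
    dag_id="nightly_postgres_maintenance",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    refresh_reports = PostgresOperator(
        task_id="refresh_reports",
        postgres_conn_id="postgres_default",   # assumed connection ID
        sql="CALL refresh_daily_reports();",   # hypothetical procedure name
    )
```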

Posted 6 days ago

Apply

0.0 - 15.0 years

83 - 104 Lacs

Delhi, Delhi

On-site

Job Title: Data Architect (Leadership Role). Company: Wingify. Location: Delhi (outstation candidates allowed). Experience Required: 10–15 years. Working Days: 5 days/week. Budget: 83 Lakh to 1.04 Cr. About Us: We are a fast-growing product-based tech company known for our flagship product VWO—a widely adopted A/B testing platform used by over 4,000 businesses globally, including Target, Disney, Sears, and Tinkoff Bank. The team is self-organizing, highly creative, and passionate about data, tech, and continuous innovation. Company Size: Mid-Sized. Industry: Consumer Internet, Technology, Consulting. Role & Responsibilities: Lead and mentor a team of Data Engineers, ensuring performance and career development. Architect scalable and reliable data infrastructure with high availability. Define and implement data governance frameworks, compliance, and best practices. Collaborate cross-functionally to execute the organization's data roadmap. Optimize data processing workflows for scalability and cost efficiency. Ensure data quality, privacy, and security across platforms. Drive innovation and technical excellence across the data engineering function. Ideal Candidate Must-Haves: Experience: 10+ years in software/data engineering roles, with at least 2–3+ years in a leadership role managing teams of 5+ Data Engineers. Proven hands-on experience setting up data engineering systems from scratch (0 → 1 stage) in high-growth B2B product companies. Technical Expertise: Strong in Java (preferred), or Python, Node.js, GoLang. Expertise in big data tools: Apache Spark, Kafka, Hadoop, Hive, Airflow, Presto, HDFS. Strong design experience in High-Level Design (HLD) and Low-Level Design (LLD). Backend frameworks like Spring Boot, Google Guice. Cloud data platforms: AWS, GCP, Azure. Familiarity with data warehousing: Snowflake, Redshift, BigQuery. Databases: Redis, Cassandra, MongoDB, TiDB. DevOps tools: Jenkins, Docker, Kubernetes, Ansible, Chef, Grafana, ELK. Other Skills: Strong understanding of data governance, security, and compliance (GDPR, SOC2, etc.). Proven strategic thinking with the ability to align technical architecture to business objectives. Excellent communication, leadership, and stakeholder management. Preferred Qualifications: Exposure to Machine Learning infrastructure/MLOps. Experience with real-time data analytics. Strong foundation in algorithms, data structures, and scalable systems. Previous work in SaaS or high-growth startups. Screening Questions: Do you have team leadership experience? How many engineers have you led? Have you built a data engineering platform from scratch? Describe the setup. What's the largest data scale you've worked with, and where? Are you open to continuing hands-on coding in this role? Interested candidates may apply at deepak.visko@gmail.com or 9238142824. Job Types: Full-time, Permanent. Pay: ₹8,300,000.00 - ₹10,400,000.00 per year. Work Location: In person.
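As context for the big data stack listed above (Spark, Kafka, Airflow), here is a minimal PySpark structured-streaming sketch that reads events from Kafka and aggregates them hourly; it assumes the spark-sql-kafka connector package is available, and the broker address and topic name are placeholders.

```python
# Sketch: consume click events from Kafka and count them per hour.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("click_aggregates").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "click_events")               # placeholder topic
    .load()
    .selectExpr("CAST(value AS STRING) AS json", "timestamp")
)

hourly_counts = events.groupBy(F.window("timestamp", "1 hour")).count()

query = (
    hourly_counts.writeStream
    .outputMode("complete")
    .format("console")   # swap for a sink such as Delta or Kafka in production
    .start()
)
query.awaitTermination()
```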

Posted 6 days ago

Apply

7.0 years

0 Lacs

Agra, Uttar Pradesh, India

Remote

Experience : 7.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Hybrid (Chennai) Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' client - Forbes Advisor) What do you need for this opportunity? Must have skills required: Agile, Program Management, data infrastructure Forbes Advisor is Looking for: Program Manager – Data Job Description Forbes Advisor is a high-growth digital media and technology company that empowers consumers to make confident decisions about money, health, careers, and everyday life. Our global data organisation builds modern, AI-augmented pipelines that turn information into revenue-driving insight. Job Description: We’re hiring a Program Manager to orchestrate complex, cross-functional data initiatives—from revenue-pipeline automation to analytics product launches. You’ll be the connective tissue between Data Engineering, Analytics, RevOps, Product, and external partners, ensuring programs land on time, on scope, and with measurable impact. If you excel at turning vision into executable roadmaps, mitigating risk before it bites, and communicating clearly across technical and business audiences, we’d love to meet you. Key Responsibilities: Own program delivery for multi-team data products (e.g., revenue-data pipelines, attribution models, partner-facing reporting APIs). Build and maintain integrated roadmaps, aligning sprint plans, funding, and resource commitments. Drive agile ceremonies (backlog grooming, sprint planning, retrospectives) and track velocity, burn-down, and cycle-time metrics. Create transparent status reporting—risks, dependencies, OKRs—tailored for engineers up to C-suite stakeholders. Proactively remove blockers by coordinating with Platform, IT, Legal/Compliance, and external vendors. Champion process optimisation: intake, prioritisation, change management, and post-mortems. Partner with RevOps and Media teams to ensure program outputs translate into revenue growth and faster decision making. Facilitate launch readiness—QA checklists, enablement materials, go-live runbooks—so new data products land smoothly. Foster a culture of documentation, psychological safety, and continuous improvement within the data organisation. Experience required: 7+ years program or project-management experience in data, analytics, SaaS, or high-growth tech. Proven success delivering complex, multi-stakeholder initiatives on aggressive timelines. Expertise with agile frameworks (Scrum/Kanban) and modern collaboration tools (Jira, Asana, Notion/Confluence, Slack). Strong understanding of data & cloud concepts (pipelines, ETL/ELT, BigQuery, dbt, Airflow/Composer). Excellent written and verbal communication—able to translate between technical teams and business leaders. Risk-management mindset: identify, quantify, and drive mitigation before issues escalate. Experience coordinating across time zones and cultures in a remote-first environment. Nice to Have Formal certification (PMP, PMI-ACP, CSM, SAFe, or equivalent). Familiarity with GCP services, Looker/Tableau, or marketing-data stacks (Google Ads, Meta, GA4). Exposure to revenue operations, performance marketing, or subscription/affiliate business models. Background in change-management or process-improvement methodologies (Lean, Six Sigma). Perks: Monthly long weekends—every third Friday off. Fitness and commute reimbursement. Remote-first culture with flexible hours and a high-trust environment. 
Opportunity to shape a world-class data platform inside a trusted global brand. Collaborate with talented engineers, analysts, and product leaders who value innovation and impact. How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 6 days ago

Apply

7.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote

Experience : 7.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Hybrid (Chennai) Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' client - Forbes Advisor) What do you need for this opportunity? Must have skills required: Agile, Program Management, data infrastructure Forbes Advisor is Looking for: Program Manager – Data Job Description Forbes Advisor is a high-growth digital media and technology company that empowers consumers to make confident decisions about money, health, careers, and everyday life. Our global data organisation builds modern, AI-augmented pipelines that turn information into revenue-driving insight. Job Description: We’re hiring a Program Manager to orchestrate complex, cross-functional data initiatives—from revenue-pipeline automation to analytics product launches. You’ll be the connective tissue between Data Engineering, Analytics, RevOps, Product, and external partners, ensuring programs land on time, on scope, and with measurable impact. If you excel at turning vision into executable roadmaps, mitigating risk before it bites, and communicating clearly across technical and business audiences, we’d love to meet you. Key Responsibilities: Own program delivery for multi-team data products (e.g., revenue-data pipelines, attribution models, partner-facing reporting APIs). Build and maintain integrated roadmaps, aligning sprint plans, funding, and resource commitments. Drive agile ceremonies (backlog grooming, sprint planning, retrospectives) and track velocity, burn-down, and cycle-time metrics. Create transparent status reporting—risks, dependencies, OKRs—tailored for engineers up to C-suite stakeholders. Proactively remove blockers by coordinating with Platform, IT, Legal/Compliance, and external vendors. Champion process optimisation: intake, prioritisation, change management, and post-mortems. Partner with RevOps and Media teams to ensure program outputs translate into revenue growth and faster decision making. Facilitate launch readiness—QA checklists, enablement materials, go-live runbooks—so new data products land smoothly. Foster a culture of documentation, psychological safety, and continuous improvement within the data organisation. Experience required: 7+ years program or project-management experience in data, analytics, SaaS, or high-growth tech. Proven success delivering complex, multi-stakeholder initiatives on aggressive timelines. Expertise with agile frameworks (Scrum/Kanban) and modern collaboration tools (Jira, Asana, Notion/Confluence, Slack). Strong understanding of data & cloud concepts (pipelines, ETL/ELT, BigQuery, dbt, Airflow/Composer). Excellent written and verbal communication—able to translate between technical teams and business leaders. Risk-management mindset: identify, quantify, and drive mitigation before issues escalate. Experience coordinating across time zones and cultures in a remote-first environment. Nice to Have Formal certification (PMP, PMI-ACP, CSM, SAFe, or equivalent). Familiarity with GCP services, Looker/Tableau, or marketing-data stacks (Google Ads, Meta, GA4). Exposure to revenue operations, performance marketing, or subscription/affiliate business models. Background in change-management or process-improvement methodologies (Lean, Six Sigma). Perks: Monthly long weekends—every third Friday off. Fitness and commute reimbursement. Remote-first culture with flexible hours and a high-trust environment. 
Opportunity to shape a world-class data platform inside a trusted global brand. Collaborate with talented engineers, analysts, and product leaders who value innovation and impact. How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 6 days ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Are you ready to join a world leader in the exciting and dynamic fields of the Pharmaceutical and Medical Device industries? PQE Group has been at the forefront of these industries since 1998, with 40 subsidiaries and more than 2000 employees in Europe, Asia, and the Americas. Due to our constant growth, we are currently looking for a Data Scientist to support our projects in Hyderabad, India or Chandigarh, India . What you’ll do: · Design, train, and deploy Machine Learning and NLP models (NER, classification, embeddings) using Large Language Models (LLMs) and BERT-like architectures. · Implement Retrieval-Augmented Generation (RAG) pipelines and LLM-based AI agents for real-time use cases. · Build interactive dashboards with Streamlit and lightweight web interfaces (HTML/CSS/JavaScript, Laravel or similar frameworks) to visualise insights and create rapid prototypes. · Define, manage, and optimise relational databases (e.g. MySQL, PostgreSQL) and vector databases (e.g. Pinecone, Weaviate, FAISS) for efficient embedding retrieval. · Collaborate with Product and DevOps teams on integration and deployment. · Document code, service prototypes, and architectural decisions clearly and for future reuse. Must-have requirements: · 3+ years of professional experience in Data Science, Machine Learning, or AI development. · Strong command of Python and key libraries (pandas, scikit-learn, PyTorch/TensorFlow, LangChain or similar). · Hands-on knowledge of LLMs (fine-tuning, prompt engineering, evaluation). · Experience with RAG systems and/or AI agents (e.g. ReAct, Auto-GPT, CrewAI). · Experience managing relational databases and designing vector databases . · Deep understanding of NLP and NER models (BERT, RoBERTa, spaCy, Hugging Face Transformers). · Experience with workflow orchestrators ( Airflow, Prefect ) or MLOps infrastructures. · Familiarity with containerisation (Docker) and CI/CD pipelines . · Solid statistical foundation and background in probabilistic models . · Proficiency with Streamlit for rapid prototyping of data-driven apps. · Front-end development skills (HTML/CSS/JavaScript; familiarity with React/Vite or similar is appreciated). · Comfortable with Git version control and working in Agile teams. · Fluent written and spoken English . Next Steps Upon receiving your application, if a match is found, the Recruiting department will contact you for an initial Talent Acquisition interview. If there's a positive match, a technical interview with the Hiring Manager will be arranged. In the case of a positive feedback coming from the Hiring Manager interview, the recruiter will contact you for the next steps or to discuss our proposal. Alternatively, if the feedback is negative, we will contact you to halt the recruitment process. Working at PQE Group As a member of the PQE team, you will be part of a challenging, multicultural company that values collaboration and innovation. PQE Group gives you the opportunity to work on international projects, improve your skills and interact with colleagues from all corners of the world. If you are looking for a rewarding and exciting career, PQE Group is the perfect place for you. Apply now and take the first step towards an amazing future with us.
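To make the RAG requirement above concrete, here is a minimal retrieval sketch using sentence-transformers and FAISS; the model name, documents, and query are illustrative placeholders rather than anything specified by PQE Group.

```python
# Sketch: embed documents, index them in FAISS, and retrieve context for an LLM prompt.
import faiss
from sentence_transformers import SentenceTransformer

docs = [
    "Batch records must be reviewed before product release.",
    "Deviation reports are filed within 24 hours of discovery.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")          # assumed embedding model
emb = model.encode(docs, convert_to_numpy=True).astype("float32")

index = faiss.IndexFlatL2(emb.shape[1])                  # exact L2 search over embeddings
index.add(emb)

query = model.encode(["When are deviations reported?"], convert_to_numpy=True).astype("float32")
_, ids = index.search(query, 1)
context = docs[ids[0][0]]   # would be inserted into the LLM prompt as grounding
print(context)
```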

Posted 6 days ago

Apply

12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

To get the best candidate experience, please consider applying for a maximum of 3 roles within 12 months to ensure you are not duplicating efforts. Job Category Software Engineering Job Details About Salesforce Salesforce is the #1 AI CRM, where humans with agents drive customer success together. Here, ambition meets action. Tech meets trust. And innovation isn’t a buzzword — it’s a way of life. The world of work as we know it is changing and we're looking for Trailblazers who are passionate about bettering business and the world through AI, driving innovation, and keeping Salesforce's core values at the heart of it all. Ready to level up your career at the company leading workforce transformation in the agentic era? You’re in the right place! Agentforce is the future of AI, and you are the future of Salesforce. As an engineering leader, you will focus on developing the team around you. Bring your technical chops to drive your teams to success around feature delivery and live-site management for a complex cloud infrastructure service. You are as enthusiastic about recruiting and building a team as you are about the challenging technical problems that your team will solve. You will also help shape, direct and execute our product vision. You’ll be challenged to blend customer-centric principles, industry-changing innovation, and the reliable delivery of new technologies. You will work directly with engineering, product, and design to create experiences that reinforce the Salesforce brand by delighting and wowing our customers with highly reliable and available services. Responsibilities Drive the vision of enabling a full suite of Salesforce applications on Google Cloud in collaboration with teams across geographies. Build and lead a team of engineers to deliver cloud frameworks, infrastructure automation tools, workflows, and validation platforms on our public cloud platforms. Bring solid experience in building and evolving large-scale distributed systems to reliably process billions of data points. Proactively identify reliability and data quality problems and drive the triaging and remediation process. Invest in continuous employee development of a highly technical team by mentoring and coaching engineers and technical leads in the team. Recruit and attract top talent. Drive execution and delivery by collaborating with cross-functional teams, architects, product owners and engineers. Experience managing 2+ engineering teams. Experience building services on public cloud platforms like GCP, AWS, Azure. Required Skills/Experiences B.S./M.S. in Computer Science or an equivalent field. 12+ years of relevant experience in software development teams, with 5+ years of experience managing teams. Passionate, curious, creative self-starter who approaches problems with the right methodology and intelligent decisions. Laser focus on impact, balancing effort to value, and getting things done. Experience providing mentorship, technical leadership, and guidance to team members. Strong customer service orientation and a desire to help others succeed. Top-notch written and oral communication skills.
Desired Skills/Experiences Working knowledge of modern technologies/services on public cloud is desirable. Experience with container orchestration systems: Kubernetes, Docker, Helios, Fleet. Expertise in open-source technologies like Elasticsearch, Logstash, Kafka, MongoDB, Hadoop, Spark, Trino/Presto, Hive, Airflow, Splunk. Benefits & Perks Comprehensive benefits package including well-being reimbursement, generous parental leave, adoption assistance, fertility benefits, and more! World-class enablement and on-demand training with Trailhead.com Exposure to executive thought leaders and regular 1:1 coaching with leadership Volunteer opportunities and participation in our 1:1:1 model for giving back to the community For more details, visit https://www.salesforcebenefits.com/ Unleash Your Potential When you join Salesforce, you’ll be limitless in all areas of your life. Our benefits and resources support you to find balance and be your best, and our AI agents accelerate your impact so you can do your best. Together, we’ll bring the power of Agentforce to organizations of all sizes and deliver amazing experiences that customers love. Apply today to not only shape the future — but to redefine what’s possible — for yourself, for AI, and the world. Accommodations If you require assistance due to a disability when applying for open positions, please submit a request via this Accommodations Request Form. Posting Statement Salesforce is an equal opportunity employer and maintains a policy of non-discrimination with all employees and applicants for employment. What does that mean exactly? It means that at Salesforce, we believe in equality for all. And we believe we can lead the path to equality in part by creating a workplace that’s inclusive and free from discrimination. Know your rights: workplace discrimination is illegal. Any employee or potential employee will be assessed on the basis of merit, competence and qualifications – without regard to race, religion, color, national origin, sex, sexual orientation, gender expression or identity, transgender status, age, disability, veteran or marital status, political viewpoint, or other classifications protected by law. This policy applies to current and prospective employees, no matter where they are in their Salesforce employment journey. It also applies to recruiting, hiring, job assignment, compensation, promotion, benefits, training, assessment of job performance, discipline, termination, and everything in between. Recruiting, hiring, and promotion decisions at Salesforce are fair and based on merit. The same goes for compensation, benefits, promotions, transfers, reduction in workforce, recall, training, and education.

Posted 6 days ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Hi Connections, Urgent - Hiring for below role About the Role: We are seeking a seasoned and highly skilled MLOps Engineer to join our growing team. The ideal candidate will have extensive hands-on experience with deploying, monitoring, and retraining machine learning models in production environments. You will be responsible for building and maintaining robust and scalable MLOps pipelines using tools like MLflow, Apache Airflow, Kubernetes, and Databricks or Azure ML. A strong understanding of infrastructure-as-code using Terraform is essential. You will play a key role in operationalizing AI/ML systems and ensuring high performance, availability, and automation across the ML lifecycle. --- Key Responsibilities: · Design and implement scalable MLOps pipelines for model training, validation, deployment, and monitoring. · Operationalize machine learning models using MLflow, Airflow, and containerized deployments via Kubernetes. · Automate and manage ML workflows across cloud platforms such as Azure ML or Databricks. · Develop infrastructure using Terraform for consistent and repeatable deployments. · Trace API calls to LLMs, Azure OCR and Paradigm · Implement performance monitoring, alerting, and logging for deployed models using custom and third-party tools. · Automate model retraining and continuous deployment pipelines based on data drift and model performance metrics. · Ensure traceability, reproducibility, and auditability of ML experiments and deployments. · Collaborate with Data Scientists, ML Engineers, and DevOps teams to streamline ML workflows. · Apply CI/CD practices and version control to the entire ML lifecycle. · Ensure secure, reliable, and compliant deployment of models in production environments. --- Required Qualifications: · 5+ years of experience in MLOps, DevOps, or ML engineering roles, with a focus on production ML systems. · Proven experience deploying machine learning models using MLflow and workflow orchestration with Apache Airflow. · Hands-on experience with Kubernetes for container orchestration in ML deployments. · Proficiency with Databricks and/or Azure ML, including model training and deployment capabilities. · Solid understanding and practical experience with Terraform for infrastructure-as-code. · Experience automating model monitoring and retraining processes based on data and model drift. · Knowledge of CI/CD tools and principles applied to ML systems. · Familiarity with monitoring tools and observability stacks (e.g., Prometheus, Grafana, Azure Monitor). · Strong scripting skills in Python · Deep understanding of ML lifecycle challenges including model versioning, rollback, and scaling. · Excellent communication skills and ability to collaborate across technical and non-technical teams. --- Nice to Have: · Experience with Azure DevOps or GitHub Actions for ML CI/CD. · Exposure to model performance optimization and A/B testing in production environments. · Familiarity with feature stores and online inference frameworks. · Knowledge of data governance and ML compliance frameworks. · Experience with ML libraries like scikit-learn, PyTorch, or TensorFlow. --- Education: · Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, or a related field.
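For reference, a minimal MLflow tracking sketch of the kind of experiment logging this role would operationalize; the experiment name and model are stand-ins, not part of the posting.

```python
# Sketch: train a toy model and log params, metrics, and the model to MLflow.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demo-retraining")   # placeholder experiment name

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100).fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")   # versioned artifact for later deployment
```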

Posted 6 days ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Manager Job Description & Summary At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions. Why PWC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. Job Description & Summary: A career within PWC Responsibilities Job Title: Cloud Engineer (Java 17+, Spring Boot, Microservices, AWS) Job Type: Full-Time Job Overview: As a Cloud Engineer, you will be responsible for developing, deploying, and managing cloud-based applications and services on AWS. You will use your expertise in Java 17+, Spring Boot, and Microservices to build robust and scalable cloud solutions. This role will involve working closely with development teams to ensure seamless cloud integration, optimizing cloud resources, and leveraging AWS tools to ensure high availability, security, and performance. Key Responsibilities: Cloud Infrastructure: Design, build, and deploy cloud-native applications on AWS, utilizing services such as EC2, S3, Lambda, RDS, EKS, API Gateway, and CloudFormation. Backend Development: Develop and maintain backend services and microservices using Java 17+ and Spring Boot, ensuring they are optimized for the cloud environment. Microservices Architecture: Architect and implement microservices-based solutions that are scalable, secure, and resilient, ensuring they align with AWS best practices. CI/CD Pipelines: Set up and manage automated CI/CD pipelines using tools like Jenkins, GitLab CI, or AWS CodePipeline for continuous integration and deployment. AWS Services Integration: Integrate AWS services such as DynamoDB, SQS, SNS, CloudWatch, and Elastic Load Balancing into microservices to improve performance and scalability. 
Performance Optimization: Monitor and optimize the performance of cloud infrastructure and services, ensuring efficient resource utilization and cost management in AWS. Security: Implement security best practices in cloud applications and services, including IAM roles, VPC configuration, encryption, and authentication mechanisms. Troubleshooting & Support: Provide ongoing support and troubleshooting for cloud-based applications, ensuring uptime, availability, and optimal performance. Collaboration: Work closely with cross-functional teams, including frontend developers, system administrators, and DevOps engineers, to ensure end-to-end solution delivery. Documentation: Document the architecture, implementation, and operations of cloud infrastructure and applications to ensure knowledge sharing and compliance. Required Skills & Qualifications: Strong experience with Java 17+ (latest version) and Spring Boot for backend development. Hands-on experience with AWS Cloud services such as EC2, S3, Lambda, RDS, EKS, API Gateway, DynamoDB, SQS, SNS, and CloudWatch. Proven experience in designing and implementing microservices architectures. Solid understanding of cloud security practices, including IAM, VPC, encryption, and secure cloud-native application development. Experience with CI/CD tools and practices (e.g., Jenkins, GitLab CI, AWS CodePipeline). Familiarity with containerization technologies like Docker, and orchestration tools like Kubernetes. Ability to optimize cloud applications for performance, scalability, and cost-efficiency. Experience with monitoring and logging tools like CloudWatch, ELK Stack, or other AWS-native tools. Knowledge of RESTful APIs and API Gateway for exposing microservices. Solid understanding of version control systems like Git and familiarity with Agile methodologies. Strong problem-solving and troubleshooting skills, with the ability to work in a fast-paced environment. Preferred Skills: AWS certifications, such as AWS Certified Solutions Architect or AWS Certified Developer. Experience with Terraform or AWS CloudFormation for infrastructure as code. Familiarity with Kubernetes and EKS for container orchestration in the cloud. Experience with serverless architectures using AWS Lambda. Knowledge of message queues (e.g., SQS, Kafka) and event-driven architectures. Education & Experience: Bachelor’s degree in Computer Science, Engineering, or related field, or equivalent practical experience. 7-11 years of experience in software development with a focus on AWS cloud and microservices. 
Mandatory Skill Sets Cloud Engineer (Java+Springboot+ AWS) Preferred Skill Sets Cloud Engineer (Java+Springboot+ AWS) Years Of Experience Required 7-11 years Education Qualification BE/BTECH, ME/MTECH, MBA, MCA Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor of Technology, Bachelor of Engineering, Master of Engineering, Master of Business Administration Degrees/Field Of Study Preferred Certifications (if blank, certifications not specified) Required Skills Cloud Engineering Optional Skills Accepting Feedback, Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Coaching and Feedback, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling {+ 33 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date
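The listing above is a Java/Spring Boot role, but as a quick illustration of the SQS integration it mentions, here is a Python (boto3) sketch of sending and receiving a queue message; the region and queue URL are placeholders.

```python
# Sketch: publish and consume a message on an SQS queue with boto3.
import json
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")                      # assumed region
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"   # placeholder URL

sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"order_id": 42, "status": "CREATED"}),
)

resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=5)
for msg in resp.get("Messages", []):
    print(msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])  # ack
```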

Posted 6 days ago

Apply

10.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote

Solution Architect (India) Work Mode: Remote/Hybrid Required exp: 10+ years Shift timing: Minimum 4 hours overlap required with US time Role Summary: The Solution Architect is responsible for designing robust, scalable, and high-performance AI and data-driven systems that align with enterprise goals. This role serves as a critical technical leader—bridging AI/ML, data engineering, ETL, cloud architecture, and application development. The ideal candidate will have deep experience across traditional and generative AI, including Retrieval-Augmented Generation (RAG) and agentic AI systems, along with strong fundamentals in data science, modern cloud platforms, and full-stack integration. Key Responsibilities:  Design and own the end-to-end architecture of intelligent systems including data ingestion (ETL/ELT), transformation, storage, modeling, inferencing, and reporting.  Architect GenAI-powered applications using LLMs, vector databases, and RAG pipelines; design agentic workflows; integrate with enterprise knowledge graphs and document repositories.  Lead the design and deployment of agentic AI systems that can plan, reason, and interact autonomously within business workflows.  Collaborate with cross-functional teams including data scientists, data engineers, MLOps, and frontend/backend developers to deliver scalable and maintainable solutions.  Define patterns and best practices for traditional ML and GenAI projects, covering model governance, explainability, reusability, and lifecycle management.  Ensure seamless integration of ML/AI systems via RESTful APIs with frontend interfaces (e.g., dashboards, portals) and backend systems (e.g., CRMs, ERPs).  Architect multi-cloud or hybrid cloud AI solutions, leveraging services from AWS, Azure, or GCP for scalable compute, storage, orchestration, and deployment.  Provide technical oversight for data pipelines (batch and real-time), data lakes, and ETL frameworks, ensuring secure and governed data movement.  Conduct architecture reviews, mentor engineering teams, and drive design standards for AI/ML, data engineering, and software integration. Qualifications:  Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.  10+ years of experience in software architecture, including at least 4 years in AI/ML-focused roles. Required Skills:  Expertise in machine learning (regression, classification, clustering), deep learning (CNNs, RNNs, transformers), and NLP.  Experience with Generative AI frameworks and services (e.g., OpenAI, LangChain, Azure OpenAI, Amazon Bedrock).  Strong hands-on Python skills, with experience in libraries such as Scikit-learn, Pandas, NumPy, TensorFlow, or PyTorch.  Proficiency in RESTful API development and integration with frontend components (React, Angular, or similar is a plus).  Deep experience in ETL/ELT processes using tools like Apache Airflow, Azure Data Factory, or AWS Glue.  Strong knowledge of cloud-native architecture and AI/ML services on any one of AWS, Azure, or GCP.  Experience with vector databases (e.g., Pinecone, FAISS, Weaviate) and semantic search patterns.  Experience in deploying and managing ML models with MLOps frameworks (MLflow, Kubeflow).  Understanding of microservices architecture, API gateways, and container orchestration (Docker, Kubernetes).  Frontend experience is good to have.
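As an example of the RESTful integration of ML systems described above, a minimal FastAPI sketch exposing a scoring endpoint; the endpoint, fields, and scoring rule are hypothetical stand-ins for a real model.

```python
# Sketch: serve a model prediction behind a REST endpoint with FastAPI.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ScoringRequest(BaseModel):
    age: float
    income: float

class ScoringResponse(BaseModel):
    churn_probability: float

@app.post("/score", response_model=ScoringResponse)
def score(req: ScoringRequest) -> ScoringResponse:
    # Placeholder rule standing in for a real model.predict() call.
    prob = min(1.0, max(0.0, 0.3 + 0.001 * req.age - 0.000001 * req.income))
    return ScoringResponse(churn_probability=prob)

# Run with: uvicorn app:app --reload  (assuming this file is saved as app.py)
```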

Posted 6 days ago

Apply

4.0 years

0 Lacs

Greater Kolkata Area

On-site

Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Senior Associate Job Description & Summary At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions. Why PWC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. Responsibilities Job Description: Analyse current business practices, processes, and procedures, and identify future business opportunities for leveraging Microsoft Azure Data & Analytics Services. Provide technical leadership and thought leadership as a senior member of the Analytics Practice in areas such as data access & ingestion, data processing, data integration, data modeling, database design & implementation, data visualization, and advanced analytics. Engage and collaborate with customers to understand business requirements/use cases and translate them into detailed technical specifications. Develop best practices including reusable code, libraries, patterns, and consumable frameworks for cloud-based data warehousing and ETL. Maintain best practice standards for the development of cloud-based data warehouse solutions, including naming standards. Design and implement highly performant data pipelines from multiple sources using Apache Spark and/or Azure Databricks. Integrate the end-to-end data pipeline to take data from source systems to target data repositories, ensuring the quality and consistency of data is always maintained. Work with other members of the project team to support delivery of additional project components (API interfaces). Evaluate the performance and applicability of multiple tools against customer requirements. Work within an Agile delivery / DevOps methodology to deliver proof of concept and production implementation in iterative sprints. Integrate Databricks with other technologies (ingestion tools, visualization tools).
Proven experience working as a data engineer Highly proficient in using the spark framework (python and/or Scala) Extensive knowledge of Data Warehousing concepts, strategies, methodologies. Direct experience of building data pipelines using Azure Data Factory and Apache Spark (preferably in Databricks). Hands on experience designing and delivering solutions using Azure including Azure Storage, Azure SQL Data Warehouse, Azure Data Lake, Azure Cosmos DB, Azure Stream Analytics Experience in designing and hands-on development in cloud-based analytics solutions. Expert level understanding on Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required. Designing and building of data pipelines using API ingestion and Streaming ingestion methods. Knowledge of Dev-Ops processes (including CI/CD) and Infrastructure as code is essential. Thorough understanding of Azure Cloud Infrastructure offerings. Strong experience in common data warehouse modeling principles including Kimball. Working knowledge of Python is desirable Experience developing security models. Databricks & Azure Big Data Architecture Certification would be plus Mandatory Skill Sets ADE, ADB, ADF Preferred Skill Sets ADE, ADB, ADF Years Of Experience Required 4-8 Years Education Qualification BE, B.Tech, MCA, M.Tech Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Master of Engineering, Bachelor of Engineering Degrees/Field Of Study Preferred Certifications (if blank, certifications not specified) Required Skills Microsoft Azure Optional Skills Accepting Feedback, Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 27 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date
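To ground the pipeline-building requirements above, a minimal Databricks-style PySpark sketch that ingests a raw CSV, cleans it, and writes a Delta table; the paths and table name are placeholders.

```python
# Sketch: raw CSV -> cleaned Delta table, as one step of an Azure Databricks pipeline.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()   # provided automatically on Databricks

raw = spark.read.option("header", True).csv("/mnt/raw/sales/")   # placeholder source path

clean = (
    raw.withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount") > 0)
    .withColumn("ingest_date", F.current_date())
)

(
    clean.write.format("delta")
    .mode("overwrite")
    .saveAsTable("analytics.sales_clean")    # placeholder target table
)
```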

Posted 6 days ago

Apply

3.0 years

0 Lacs

Greater Kolkata Area

On-site

Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Senior Associate Job Description & Summary At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage. Why PWC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge. Responsibilities 3+ years of experience in implementing analytical solutions using Palantir Foundry. preferably in PySpark and hyperscaler platforms (cloud services like AWS, GCP and Azure) with focus on building data transformation pipelines at scale. Team management: Must have experience in mentoring and managing large teams (20 to 30 people) for complex engineering programs. Candidate should have experience in hiring and nurturing talent in Palantir Foundry. Training: candidate should have experience in creating training programs in Foundry and delivering the same in a hands-on format either offline or virtually. At least 3 years of hands-on experience of building and managing Ontologies on Palantir Foundry. At least 3 years of experience with Foundry services: Data Engineering with Contour and Fusion Dashboarding, and report development using Quiver (or Reports) Application development using Workshop. 
Exposure to Map and Vertex is a plus Palantir AIP experience will be a plus Hands-on experience in data engineering and building data pipelines (Code/No Code) for ELT/ETL data migration, data refinement and data quality checks on Palantir Foundry. Hands-on experience of managing data life cycle on at least one hyperscaler platform (AWS, GCP, Azure) using managed services or containerized deployments for data pipelines is necessary. Hands-on experience in working & building on Ontology (esp. demonstrable experience in building Semantic relationships). Proficiency in SQL, Python and PySpark. Demonstrable ability to write & optimize SQL and spark jobs. Some experience in Apache Kafka and Airflow is a prerequisite as well. Hands-on experience on DevOps on hyperscaler platforms and Palantir Foundry is necessary. Experience in MLOps is a plus. Experience in developing and managing scalable architecture & working experience in managing large data sets. Opensource contributions (or own repositories highlighting work) on GitHub or Kaggle is a plus. Experience with Graph data and graph analysis libraries (like Spark GraphX, Python NetworkX etc.) is a plus. A Palantir Foundry Certification (Solution Architect, Data Engineer) is a plus. Certificate should be valid at the time of Interview. Experience in developing GenAI application is a plus Mandatory Skill Sets At least 3 years of hands-on experience of building and managing Ontologies on Palantir Foundry. At least 3 years of experience with Foundry services Preferred Skill Sets Palantir Foundry Years Of Experience Required Experience 4 to 7 years ( 3 + years relevant) Education Qualification Bachelor's degree in computer science, data science or any other Engineering discipline. Master’s degree is a plus. Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor of Science Degrees/Field Of Study Preferred Certifications (if blank, certifications not specified) Required Skills Palantir (Software) Optional Skills Accepting Feedback, Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis {+ 16 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date

Posted 6 days ago

Apply

10.0 years

0 Lacs

Chandigarh, India

On-site

Job Description: 7–10 years of industry experience, with at least 5 years in machine learning roles. Advanced proficiency in Python and common ML libraries: TensorFlow, PyTorch, Scikit-learn. Experience with distributed training, model optimization (quantization, pruning), and inference at scale. Hands-on experience with cloud ML platforms: AWS (SageMaker), GCP (Vertex AI), or Azure ML. Familiarity with MLOps tooling: MLflow, TFX, Airflow, or Kubeflow; and data engineering frameworks like Spark, dbt, or Apache Beam. Strong grasp of CI/CD for ML, model governance, and post-deployment monitoring (e.g., data drift, model decay). Excellent problem-solving, communication, and documentation skills.
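As a concrete instance of the model optimization mentioned above (quantization), a short PyTorch sketch applying post-training dynamic quantization to a toy network; the architecture is a stand-in.

```python
# Sketch: dynamic (post-training) int8 quantization of Linear layers in PyTorch.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)
model.eval()

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8   # quantize Linear weights to int8
)

x = torch.randn(1, 128)
print(quantized(x).shape)   # torch.Size([1, 10]); smaller weights, faster CPU inference
```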

Posted 6 days ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Manager Job Description & Summary At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions. Why PWC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. Responsibilities Job Description: Job Summary: We are seeking a talented Data Engineer with strong expertise in Databricks, specifically in Unity Catalog, PySpark, and SQL, to join our data team. You’ll play a key role in building secure, scalable data pipelines and implementing robust data governance strategies using Unity Catalog. Key Responsibilities: Design and implement ETL/ELT pipelines using Databricks and PySpark. Work with Unity Catalog to manage data governance, access controls, lineage, and auditing across data assets. Develop high-performance SQL queries and optimize Spark jobs. Collaborate with data scientists, analysts, and business stakeholders to understand data needs. Ensure data quality and compliance across all stages of the data lifecycle. Implement best practices for data security and lineage within the Databricks ecosystem. Participate in CI/CD, version control, and testing practices for data pipelines. Required Skills: Proven experience with Databricks and Unity Catalog (data permissions, lineage, audits). Strong hands-on skills with PySpark and Spark SQL. Solid experience writing and optimizing complex SQL queries. Familiarity with Delta Lake, data lakehouse architecture, and data partitioning. Experience with cloud platforms like Azure or AWS. Understanding of data governance, RBAC, and data security standards. Preferred Qualifications: Databricks Certified Data Engineer Associate or Professional. Experience with tools like Airflow, Git, Azure Data Factory, or dbt. Exposure to streaming data and real-time processing. Knowledge of DevOps practices for data engineering. 
Mandatory Skill Sets Databricks Preferred Skill Sets Databricks Years Of Experience Required 7-14 years Education Qualification BE/BTECH, ME/MTECH, MBA, MCA Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Master of Business Administration, Bachelor of Engineering, Bachelor of Technology, Master of Engineering Degrees/Field Of Study Preferred Certifications (if blank, certifications not specified) Required Skills Databricks Platform Optional Skills Accepting Feedback, Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Coaching and Feedback, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling {+ 33 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date August 11, 2025
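For context on the Unity Catalog governance work described above, a minimal sketch of table-level grants issued from a Databricks notebook (where `spark` is predefined); the catalog, schema, table, and group names are placeholders.

```python
# Sketch: grant a group read access to a Unity Catalog table via Spark SQL.
grants = [
    "GRANT USE CATALOG ON CATALOG main TO `data_analysts`",
    "GRANT USE SCHEMA ON SCHEMA main.sales TO `data_analysts`",
    "GRANT SELECT ON TABLE main.sales.orders TO `data_analysts`",
]

for stmt in grants:
    spark.sql(stmt)   # `spark` is supplied by the Databricks runtime

# Access, lineage, and audit events can then be reviewed in Catalog Explorer
# or queried from the system tables.
```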

Posted 6 days ago

Apply

3.0 years

0 Lacs

India

Remote

Job Title: AI Engineer – Web Crawling & Field Data Extraction Location: [Remote] Department: Engineering / Data Science Experience Level: Mid to Senior Employment Type: Contract to Hire About the Role: We are looking for a skilled AI Engineer with strong experience in web crawling, data parsing, and AI/ML-driven information extraction to join our team. You will be responsible for developing systems that automatically crawl websites, extract structured and unstructured data, and intelligently map the extracted content to predefined fields for business use. This role combines practical web scraping, NLP techniques, and AI model integration to automate workflows that involve large-scale content ingestion. Key Responsibilities: Design and develop automated web crawlers and scrapers to extract information from various websites and online resources. Implement robust and scalable data extraction pipelines that convert semi-structured/unstructured data into structured field-level data. Use Natural Language Processing (NLP) and ML models to intelligently interpret and map extracted content to specific form fields or schemas. Build systems that can handle dynamic web content, captchas, JavaScript-rendered pages, and anti-bot mechanisms. Collaborate with frontend/backend teams to integrate extracted data into user-facing applications. Monitor crawler performance, ensure compliance with legal/data policies, and manage scheduling, deduplication, and logging. Optimize crawling strategies using AI/heuristics for prioritization, entity recognition, and data validation. Create tools for auto-filling forms or generating structured records from crawled data. Required Skills and Qualifications: Bachelor’s or Master’s degree in Computer Science, AI/ML, Data Science, or related field. 3+ years of hands-on experience with web scraping frameworks (e.g., Scrapy, Puppeteer, Playwright, Selenium). Proficiency in Python, with experience in BeautifulSoup, lxml, requests, aiohttp, or similar libraries. Experience with NLP libraries (e.g., spaCy, NLTK, Hugging Face Transformers) to parse and map extracted data. Familiarity with ML-based data classification, extraction, and field mapping. Knowledge of structured data formats (JSON, XML, CSV) and RESTful APIs. Experience handling anti-scraping techniques and rate-limiting controls. Strong problem-solving skills, clean coding practices, and the ability to work independently. Nice-to-Have Experience with AI form understanding (e.g., LayoutLM, DocAI, OCR). Familiarity with Large Language Models (LLMs) for intelligent data labeling or validation. Exposure to data pipelines, ETL frameworks, or orchestration tools (Airflow, Prefect). Understanding of data privacy, compliance, and ethical crawling standards. Why Join Us? Work on cutting-edge AI applications in real-world automation. Be part of a fast-growing and collaborative team. Opportunity to lead and shape intelligent data ingestion solutions from the ground up.
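To illustrate the crawl-and-map workflow described above, a minimal sketch using requests and BeautifulSoup that maps scraped values onto a fixed field schema; the URL and CSS selectors are placeholders for a real target site.

```python
# Sketch: fetch a page and map page content to predefined fields.
import requests
from bs4 import BeautifulSoup

FIELD_SELECTORS = {                 # target field -> CSS selector (assumed page layout)
    "title": "h1.product-title",
    "price": "span.price",
    "sku": "div.sku",
}

def extract_fields(url: str) -> dict:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    record = {}
    for field, selector in FIELD_SELECTORS.items():
        node = soup.select_one(selector)
        record[field] = node.get_text(strip=True) if node else None
    return record

print(extract_fields("https://example.com/product/123"))   # placeholder URL
```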

Posted 6 days ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Teamwork makes the stream work.

Roku is changing how the world watches TV. Roku is the #1 TV streaming platform in the U.S., Canada, and Mexico, and we've set our sights on powering every television in the world. Roku pioneered streaming to the TV. Our mission is to be the TV streaming platform that connects the entire TV ecosystem. We connect consumers to the content they love, enable content publishers to build and monetize large audiences, and provide advertisers unique capabilities to engage consumers. From your first day at Roku, you'll make a valuable - and valued - contribution. We're a fast-growing public company where no one is a bystander. We offer you the opportunity to delight millions of TV streamers around the world while gaining meaningful experience across a variety of disciplines.

About the Team:
The Data Foundations team plays a critical role in supporting Roku Ads business intelligence and analytics. The team is responsible for developing and managing foundational datasets designed to serve the operational and analytical needs of the broader organization. The team's mission is carried out through three focus areas: acting as the interface between data producers and consumers, simplifying data architecture, and creating tools in a standardized way.

About the Role:
We are seeking a talented and experienced Senior Software Engineer with a strong background in big data technologies, including Apache Spark and Apache Airflow. This hybrid role bridges software and data engineering, requiring expertise in designing, building, and maintaining scalable systems for both application development and data processing. You will collaborate with cross-functional teams to design and manage robust, production-grade, large-scale data systems. The ideal candidate is a proactive self-starter with a deep understanding of high-scale data services and a commitment to excellence.

What you’ll be doing:
Software Development: Write clean, maintainable, and efficient code, ensuring adherence to best practices through code reviews.
Big Data Engineering: Design, develop, and maintain data pipelines and ETL workflows using Apache Spark and Apache Airflow. Optimize data storage, retrieval, and processing systems to ensure reliability, scalability, and performance. Develop and fine-tune complex queries and data processing jobs for large-scale datasets. Monitor, troubleshoot, and improve data systems for minimal downtime and maximum efficiency.
Collaboration & Mentorship: Partner with data scientists, software engineers, and other teams to deliver integrated, high-quality solutions. Provide technical guidance and mentorship to junior engineers, promoting best practices in data engineering.

We’re excited if you have:
Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
5+ years of experience in software and/or data engineering, with expertise in big data technologies such as Apache Spark, Apache Airflow, and Trino.
Strong understanding of SOLID principles and distributed systems architecture.
Proven experience in distributed data processing, data warehousing, and real-time data pipelines.
Advanced SQL skills, with expertise in query optimization for large datasets.
Exceptional problem-solving abilities and the capacity to work independently or collaboratively.
Excellent verbal and written communication skills.
Experience with cloud platforms such as AWS, GCP, or Azure, and containerization tools like Docker and Kubernetes.
(Preferred) Familiarity with additional big data technologies, including Hadoop, Kafka, and Presto.
(Preferred) Strong programming skills in Python, Java, or Scala.
(Preferred) Knowledge of CI/CD pipelines, DevOps practices, and infrastructure-as-code tools (e.g., Terraform).
(Preferred) Expertise in data modeling, schema design, and data visualization tools.
(Preferred) AI literacy and curiosity. You have either tried Gen AI in your previous work or outside of work, or are curious about Gen AI and have explored it.

Benefits:
Roku is committed to offering a diverse range of benefits as part of our compensation package to support our employees and their families. Our comprehensive benefits include global access to mental health and financial wellness support and resources. Local benefits include statutory and voluntary benefits which may include healthcare (medical, dental, and vision), life, accident, disability, commuter, and retirement options (401(k)/pension). Our employees can take time off work for vacation and other personal reasons to balance their evolving work and life needs. It's important to note that not every benefit is available in all locations or for every role. For details specific to your location, please consult with your recruiter.

The Roku Culture:
Roku is a great place for people who want to work in a fast-paced environment where everyone is focused on the company's success rather than their own. We try to surround ourselves with people who are great at their jobs, who are easy to work with, and who keep their egos in check. We appreciate a sense of humor. We believe a fewer number of very talented folks can do more for less cost than a larger number of less talented teams. We're independent thinkers with big ideas who act boldly, move fast and accomplish extraordinary things through collaboration and trust. In short, at Roku you'll be part of a company that's changing how the world watches TV. We have a unique culture that we are proud of. We think of ourselves primarily as problem-solvers, which itself is a two-part idea. We come up with the solution, but the solution isn't real until it is built and delivered to the customer. That penchant for action gives us a pragmatic approach to innovation, one that has served us well since 2002. To learn more about Roku, our global footprint, and how we've grown, visit https://www.weareroku.com/factsheet.

By providing your information, you acknowledge that you have read our Applicant Privacy Notice and authorize Roku to process your data subject to those terms.
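As a hedged illustration of the Spark-plus-Airflow pipeline work this listing describes (a generic sketch, not Roku's codebase), here is a minimal daily DAG that runs a PySpark aggregation; the DAG id, dataset paths, and column names are placeholders:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def aggregate_daily_events(ds: str, **_) -> None:
    """Toy Spark job: roll one day of raw events into a summary table."""
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName(f"daily_events_{ds}").getOrCreate()
    # Placeholder input/output paths and columns
    events = spark.read.parquet(f"s3a://example-bucket/events/dt={ds}/")
    summary = (
        events.groupBy("account_id")
        .agg(F.count("*").alias("event_count"),
             F.sum("watch_seconds").alias("total_watch_seconds"))
    )
    summary.write.mode("overwrite").parquet(f"s3a://example-bucket/summaries/dt={ds}/")
    spark.stop()

# Requires Airflow 2.4+ for the `schedule` argument (older 2.x uses schedule_interval)
with DAG(
    dag_id="daily_event_rollup",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(
        task_id="aggregate_daily_events",
        python_callable=aggregate_daily_events,
    )
```

In a real deployment the Spark step would more likely be submitted to a cluster (for example via a SparkSubmit-style operator) rather than run inside the Airflow worker.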

Posted 6 days ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Company Name: IT Consulting.
Location: Sholinganallur, Chennai
Designation: Product Manager Consultant/Expert (work in office only - 5 days a week)
Working Days: Monday to Friday
Working Time: 10 am to 7 pm
Interview Process: 1. HackerRank test 2. Technical interview (internal) 3. Client interview
Experience: 8+ years
CTC: 22 LPA to 23 LPA only (work in office); the final CTC depends on the candidate's current CTC, market-standard hike percentage, and skill set. On company's payroll.

About Us:
We're a boutique IT Systems Integration and consulting firm, making a significant impact on businesses worldwide since 1998. Our headquarters is in Michigan, USA, and we've established a Center of Excellence in Bangalore, India, with an additional office in Ontario, Canada. Our influence spans more than 40 countries, where we're making a difference. We have a strong focus on the Automotive sector, and our reach extends into critical industries like Manufacturing, Healthcare, Utilities, and Higher Education. What sets us apart are our core competencies, including ERP implementation, Cloud Services, Application Development, and Staff Augmentation. We support some of our clients with expert RPO models, and Ford is one such major client, for whom we are the No. 1 IT vendor in the USA region and are now expanding our expertise in the India region; we are one of their preferred suppliers/vendors. We'd love to explore how your skills could align with our exciting journey, particularly with Ford in Chennai. You will be deputed at Ford, Sholinganallur.

Primary Skills (as required by the client):
We're seeking a detail-oriented, technically-minded Product Manager with experience in Python, JIRA, GCP, GCP Cloud Run, Angular, Airflow, BigQuery, Terraform, LLMs, Cycode, Dynatrace, Checkmarx, and Fossa.

Posted 6 days ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary:
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC:
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities:
Design and build data pipelines and data lakes to automate ingestion of structured and unstructured data, providing fast, optimized, and robust end-to-end solutions.
Knowledge of data lake and data warehouse concepts.
Experience working with AWS big data technologies.
Improve the data quality and reliability of data pipelines through monitoring, validation, and failure detection.
Deploy and configure components to production environments.

Technology: Redshift, S3, AWS Glue, Lambda, SQL, PySpark
Mandatory Skill Sets: AWS Data Engineer
Preferred Skill Sets: AWS Data Engineer
Years of Experience Required: 4-8
Education Qualification: B.Tech/MBA/MCA
Degrees/Field of Study Required: Master of Business Administration, Bachelor of Technology
Required Skills: AWS Development, Data Engineering
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 27 more}
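As an illustrative sketch only (not PwC's or any client's code), here is a minimal AWS Glue PySpark job in the spirit of the Redshift/S3/Glue stack named above; the bucket names, column names, and transformation are hypothetical:

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Standard Glue boilerplate: job arguments plus Spark/Glue contexts
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Ingest raw JSON events from S3 (placeholder bucket/path)
raw = spark.read.json("s3://example-raw-bucket/events/")

# Basic quality filter and a daily aggregate
clean = raw.dropDuplicates(["event_id"]).filter(F.col("event_ts").isNotNull())
daily = (
    clean.withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "customer_id")
    .agg(F.count("*").alias("events"))
)

# Land curated Parquet for downstream Redshift COPY / Spectrum queries
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-curated-bucket/daily_events/"
)

job.commit()
```

The write target here is curated Parquet on S3; loading into Redshift would typically follow via COPY or a Glue Redshift connection, depending on the client's architecture.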

Posted 6 days ago

Apply

6.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: SAP
Management Level: Senior Associate

Job Description & Summary:
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC:
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities:
Utilizing expertise in Power Apps, Power Pages, Power Automate, and Power Virtual Agent development.
Designing and creating custom business apps, such as Canvas Apps, SharePoint Form Apps, Model Driven Apps, and Portals/Power Pages Portal.
Implementing various Power Automate Flows, including Automated, Instant, Business Process Flow, and UI Flows.
Collaborating with backend teams to integrate Power Platform solutions with SQL Server and SPO.
Demonstrating strong knowledge of Dataverse, including security and permission levels.
Developing and utilizing custom connectors in Power Platform solutions.
Creating and consuming functions/APIs to retrieve/update data from the database.
Managing managed solutions to ensure seamless deployment and version control.
Experience in Azure DevOps CI/CD deployment pipelines.
Monitoring and troubleshooting any performance bottlenecks.
Having any coding/programming experience is a plus.
Excellent communication skills.

Requirements:
6-9 years of relevant experience.
Strong hands-on work experience with Power Pages and Model Driven Apps with Dataverse.
Experience in Azure DevOps CI/CD deployment pipelines.
Good communication skills.

Mandatory Skill Sets: Strong hands-on work experience with Power Pages and Model Driven Apps with Dataverse.
Preferred Skill Sets: Experience in Azure DevOps CI/CD deployment pipelines.
Years of Experience Required: 5 years to 9 years
Education Qualification: Bachelor's degree in Computer Science, Engineering, or a related field.
Degrees/Field of Study Required: Bachelor of Engineering, Bachelor of Technology
Required Skills: Microsoft Power Apps
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 27 more}
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No

Posted 6 days ago

Apply

1.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Exciting Opportunity at Eloelo: Join the Future of Live Streaming and Social Gaming!

Are you ready to be a part of the dynamic world of live streaming and social gaming? Look no further! Eloelo, an innovative Indian platform founded in February 2020 by ex-Flipkart executives Akshay Dubey and Saurabh Pandey, is on the lookout for passionate individuals to join our growing team in Bangalore.

About Us:
Eloelo stands at the forefront of multi-host video and audio rooms, offering a unique blend of interactive experiences, including chat rooms, PK challenges, audio rooms, and captivating live games like Lucky 7, Tambola, Tol Mol Ke Bol, and Chidiya Udd. Our platform has successfully attracted audiences from all corners of India, providing a space for social connections and immersive gaming.

Recent Milestone:
In pursuit of excellence, Eloelo raised $22Mn in October 2023 from a diverse group of investors, including Lumikai, Waterbridge Capital, Courtside Ventures, Griffin Gaming Partners, and other esteemed new and existing contributors.

Why Eloelo?
Be a part of a team that thrives on creativity and innovation in the live streaming and social gaming space.
Rub shoulders with the stars! Eloelo regularly hosts celebrities such as Akash Chopra, Kartik Aryan, Rahul Dua, Urfi Javed, and Kiku Sharda from the Kapil Sharma Show; that's our level of celebrity collaboration.
Work with a world-class, high-performance team that constantly pushes boundaries and limits and redefines what is possible.
Enjoy fun and work in the same place, with an amazing work culture, flexible timings, and a vibrant atmosphere.

We are looking to hire a business analyst to join our growth analytics team. This role sits at the intersection of business strategy, marketing performance, creative experimentation, and customer lifecycle management, with a growing focus on AI-led insights. You’ll drive actionable insights to guide our performance marketing, creative strategy, and lifecycle interventions, while also building scalable analytics foundations for a fast-moving growth team.

About the Role:
We are looking for a highly skilled and creative Data Scientist to join our growing team and help drive data-informed decisions across our entertainment platforms. You will leverage advanced analytics, machine learning, and predictive modeling to unlock insights about our audience, content performance, and product engagement, ultimately shaping the way millions of people experience entertainment.

Key Responsibilities:
Develop and deploy machine learning models to solve key business problems (e.g., personalization, recommendation systems, churn prediction).
Analyze large, complex datasets to uncover trends in content consumption, viewer preferences, and engagement behaviors.
Partner with product, marketing, engineering, and content teams to translate data insights into actionable strategies.
Design and execute A/B and multivariate experiments to evaluate the impact of new features and campaigns.
Build dashboards and visualizations to monitor key metrics and provide stakeholders with self-service analytics tools.
Collaborate on the development of audience segmentation, lifetime value modeling, and predictive analytics.
Stay current with emerging technologies and industry trends in data science and entertainment.

Qualifications:
Master’s or PhD in Computer Science, Statistics, Mathematics, Data Science, or a related field.
1+ years of experience as a Data Scientist, ideally within media, streaming, gaming, or entertainment tech. Proficiency in programming languages such as Python or R. Strong SQL skills and experience working with large-scale datasets and data warehousing tools (e.g., Snowflake, BigQuery, Redshift). Experience with machine learning libraries/frameworks (e.g., scikit-learn, TensorFlow, PyTorch). Solid understanding of experimental design and statistical analysis techniques. Ability to clearly communicate complex technical findings to non-technical stakeholders. Preferred Qualifications: Experience building recommendation engines, content-ranking algorithms, or personalization models in an entertainment context. Familiarity with user analytics tools such as Mixpanel, Amplitude, or Google Analytics. Prior experience with data pipeline and workflow tools (e.g., Airflow, dbt). Background in natural language processing (NLP), computer vision, or audio analysis is a plus. Why Join Us: Shape the future of how audiences engage with entertainment through data-driven storytelling. Work with cutting-edge technology on high-impact, high-visibility projects. Join a collaborative team in a dynamic and fast-paced environment where creativity meets data science.
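To illustrate the kind of A/B experiment evaluation this role calls for (a generic sketch, not Eloelo's methodology; the conversion counts are hypothetical), here is a two-proportion z-test using statsmodels:

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical experiment results: conversions and exposures per variant
conversions = np.array([420, 480])    # control, treatment
exposures = np.array([10_000, 10_000])

# Two-sided test of equal conversion rates
z_stat, p_value = proportions_ztest(count=conversions, nobs=exposures)

control_rate, treatment_rate = conversions / exposures
lift = (treatment_rate - control_rate) / control_rate

print(f"control={control_rate:.2%} treatment={treatment_rate:.2%} lift={lift:+.1%}")
print(f"z={z_stat:.2f}, p={p_value:.4f}")
# A small p-value (e.g. < 0.05) suggests the observed lift is unlikely under the
# null hypothesis of no difference; in practice you would also check statistical
# power, guardrail metrics, and multiple-testing corrections.
```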

Posted 6 days ago

Apply

30.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About REA Group:
In 1995, in a garage in Melbourne, Australia, REA Group was born from a simple question: “Can we change the way the world experiences property?” Could we? Yes. Are we done? Never. Fast forward 30 years, and REA Group is a market leader in online real estate on three continents, continuing to grow rapidly across the globe. The secret to our growth is staying true to that ‘day one’ mindset; the hunger to innovate, the ambition to change the world, and the curiosity to reimagine the future. Our new Tech Center in Cyber City is dedicated to accelerating REA Group’s global technology delivery through relentless innovation. We’re looking for the best technologists, inventors and leaders in India to join us on this exciting new journey. If you’re excited by the prospect of creating something magical from scratch, then read on.

Senior Engineer – Consumer Audience Data

What the role is all about:
We are looking for a Senior Engineer with 3-5 years of experience to join the Audience Data team within the Personalization & Privacy division of our Consumer Group. This team is tasked with managing a significant data asset comprising terabytes of user interactions from our website and apps. This data is crucial for our personalization efforts, machine learning, product insights and analytics, as well as customer reporting functions. You will join a group of wonderful people, work in a distributed team, and solve complex problems with far-reaching benefits to the business, facilitating in-depth analysis and understanding of user behavior and supporting data-driven decision-making across our business.

While no two days are likely to be the same, your typical responsibilities will include:
Contribute to the design and architecture of scalable, efficient, and secure data solutions, considering long-term scalability and maintainability.
Contribute to the adoption of best practices, coding standards, and engineering principles across the team to ensure a high-quality and maintainable codebase.
Conduct performance analysis, optimization, and tuning of data processing workflows and systems to enhance efficiency and meet performance targets.
Support the team’s iterations, scope, capacity, risks, issues, and timelines.
Participate in technical discussions and code reviews to maintain code quality, identify improvement opportunities, and ensure adherence to standards.
Mentor and coach engineers, fostering their professional growth and assisting them in overcoming technical challenges.
Drive continuous improvement initiatives, such as automation, tooling enhancements, and process optimizations, to increase productivity and operational efficiency.

Who we’re looking for:
Experience in designing, coding, and testing data platform/management tools and systems.
Proven knowledge of software development principles and best practices.
Proficient in programming languages commonly used in platform and data engineering, such as Python, Java, or Go.
Exposure to data engineering concepts and associated technologies such as Airflow, BigQuery, Kafka, batch and real-time data pipelines, ELT, and SQL.
Knowledge of data modelling methodologies like Kimball or Data Vault 2.0 preferred.
Exposure to Linux and shell scripting.
Exposure to DevOps practices and techniques, such as Docker and CI/CD tools.
Ability to communicate and collaborate effectively with business stakeholders.
Experience working with open-source tooling across platform and integrations.
Understanding of data governance best practices and principles.
Experience in developing and integrating APIs, ensuring scalability and performance.
Ability to ensure the operational stability and optimal performance of infrastructure and platform systems.
Excellent communication skills and the ability to collaborate effectively with cross-functional teams.
Experience with DevOps practices and tools, such as Docker and CI/CD.
Ability to work collaboratively and autonomously in a fast-paced environment.
Willingness to learn new and complex technologies, and ability to share knowledge with the team.
3+ years of experience working with platform and data engineering environments.

Bonus points for:
Experience in using and managing Cloud infrastructure in AWS and/or GCP.
Experience with Infrastructure as Code techniques, particularly Terraform.
Exposure to platform engineering concepts or developer experience & tooling.
Knowledge of SRE and observability best practices.

What we offer:
A hybrid and flexible approach to working.
Transport options to help you get to and from work, including home pick-up and drop-off.
Meals provided on site in our office.
Flexible leave options including parental leave, family care leave and celebration leave.
Insurances for you and your immediate family members.
Programs to support mental, emotional, financial and physical health & wellbeing.
Continuous learning and development opportunities to further your technical expertise.

The values we live by:
Our values are at the core of how we operate, treat each other, and make decisions. We believe that how we work is equally important as what we do to achieve our goals. This commitment is at the heart of everything we do, from the way we interact with colleagues to the way we serve our customers and communities.

Our commitment to Diversity, Equity, and Inclusion:
We are committed to providing a working environment that embraces and values diversity, equity and inclusion. We believe teams with diverse ideas and experiences are more creative, more effective and fuel disruptive thinking - be it cultural and ethnic backgrounds, gender identity, disability, age, sexual orientation, or any other identity or lived experience. We know diverse teams are critical to maintaining our success and driving new business opportunities. If you've got the skills, dedication and enthusiasm to learn but don't necessarily meet every single point on the job description, please still get in touch.

REA Group in India:
You might already recognise our logo. The REA brand does have an existing presence in India. In fact, we set up our new tech hub in Gurugram to be their neighbours! REA Group holds a controlling interest in REA India Pte. Ltd., operator of established brands Housing.com, Makaan.com and PropTiger.com, three of the country’s leading digital property marketplaces. Through our close connection to REA India, we’ve seen first-hand the incredible talent the country has to offer, and the huge opportunity to expand our global workforce. Our Cyber City Tech Center is an extension of REA Group; a satellite office working directly with our Australia HQ on local projects and tech delivery. All our brands, across the globe, connect regularly, learn from each other and collaborate on shared value initiatives.

Posted 6 days ago

Apply

2.0 years

0 Lacs

Navi Mumbai, Maharashtra, India

Remote

Role: Machine Learning Engineer - MLOps

Job Overview:
As a Senior Software Development Engineer, Machine Learning (ML) Operations in the Technology & Engineering division, you will be responsible for enabling PitchBook’s Machine Learning teams and practitioners by providing tools that optimize all aspects of the Machine Learning Development Life Cycle (MLDLC). Your work will support projects in a variety of domains, including Generative AI (GenAI), Large Language Models (LLMs), Natural Language Processing (NLP), Classification, and Regression.

Team Overview:
Your team’s goal will be to reduce friction and time-to-business-value for teams building Artificial Intelligence (AI) solutions at PitchBook. You will be essential in helping to build exceptional AI solutions relied upon and used by thousands of PitchBook customers every day. You will work with PitchBook professionals around the world with the collective goal of delighting our customers and growing our business. While demonstrating a growth mindset, you will be expected to continuously develop your expertise in a way that enhances PitchBook’s AI capabilities in a scalable and repeatable manner. You will be able to solve various common challenges faced in the MLDLC while providing technical guidance to less experienced peers.

Outline of Duties and Responsibilities:
Serve as a force multiplier for development teams by creating golden paths that remove roadblocks and improve ideation and innovation.
Collaborate with other engineers, product managers, and internal stakeholders in an Agile environment.
Design and deliver on projects end-to-end with little to no guidance.
Provide support to teams building and deploying AI applications by addressing common pain points in the MLDLC.
Learn constantly and be passionate about discovering new tools, technologies, libraries, and frameworks (commercial and open source) that can be leveraged to improve PitchBook’s AI capabilities.
Support the vision and values of the company through role modeling and encouraging desired behaviors.
Participate in various cross-functional company initiatives and projects as requested.
Contribute to strategic planning in a way that ensures the team is building exceptional products that bring real business value.
Evaluate frameworks, vendors, and tools that can be used to optimize processes and costs with minimal guidance.

Experience, Skills and Qualifications:
Degree in Computer Science, Information Systems, Machine Learning, or a similar field preferred (or commensurate experience).
2+ years of experience in hands-on development of Machine Learning algorithms.
2+ years of experience in hands-on deployment of Machine Learning services.
2+ years of experience supporting the entire MLDLC, including post-deployment operations such as monitoring and maintenance.
2+ years of experience with Amazon Web Services (AWS) and/or Google Cloud Platform (GCP).
Experience with at least 80% of the following: PyTorch, TensorFlow, LangChain, scikit-learn, Redis, Elasticsearch, Amazon SageMaker, Google Vertex AI, Weights & Biases, FastAPI, Prometheus, Grafana, Apache Kafka, Apache Airflow, MLflow, Kubeflow.
Ability to break large, complex problems into well-defined steps, ensuring iterative development and continuous improvement.
Experience in cloud-native delivery, with a deep practical understanding of containerization technologies such as Kubernetes and Docker, and the ability to manage these across different regions.
Proficiency in GitOps and the creation/management of CI/CD pipelines.
Demonstrated experience building and using SQL/NoSQL databases.
Demonstrated experience with Python (Java is a plus) and other relevant programming languages and tools.
Excellent problem-solving skills with a focus on innovation, efficiency, and scalability in a global context.
Strong communication and collaboration skills, with the ability to engage effectively with internal customers across various cultures and regions.
Ability to be a team player who can also work independently.
Experience working across multiple development teams is a plus.

Working Conditions:
The job conditions for this position are in a standard office setting. Employees in this position use a PC and phone on an ongoing basis throughout the day. Limited corporate travel may be required to remote offices or other business meetings and events.

Morningstar India is an equal opportunity employer.

Morningstar’s hybrid work environment gives you the opportunity to work remotely and collaborate in-person each week. We’ve found that we’re at our best when we’re purposely together on a regular basis, at least three days each week. A range of other benefits are also available to enhance flexibility as needs change. No matter where you are, you’ll have tools and resources to engage meaningfully with your global colleagues.

Legal Entity: Morningstar India Private Ltd. (Delhi)
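As a small, generic illustration of the model-serving side of the MLDLC described above (not PitchBook's stack), here is a FastAPI endpoint that serves an MLflow-packaged model; the model URI and feature names are placeholders:

```python
import mlflow.pyfunc
import pandas as pd
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="model-serving-sketch")

# Placeholder URI: could point at a registry entry such as "models:/churn/Production"
MODEL_URI = "models:/example-model/Production"
model = mlflow.pyfunc.load_model(MODEL_URI)

class PredictRequest(BaseModel):
    features: dict  # e.g. {"tenure_days": 120, "sessions_last_week": 4}

@app.post("/predict")
def predict(req: PredictRequest) -> dict:
    # MLflow pyfunc models accept a pandas DataFrame as input
    frame = pd.DataFrame([req.features])
    prediction = model.predict(frame)
    # Output shape depends on the model flavor; numpy arrays and Series support tolist()
    return {"prediction": prediction.tolist()}

# Run locally with: uvicorn serving_sketch:app --reload
```

In practice a golden-path template like this would also wire in request logging, Prometheus metrics, and input-schema validation before teams reuse it.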

Posted 6 days ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

You strive to be an essential member of a diverse team of visionaries dedicated to making a lasting impact. Don't pass up this opportunity to collaborate with some of the brightest minds in the field and deliver best-in-class solutions to the industry.

As a Senior Lead Data Architect at JPMorgan Chase within Consumer and Community Banking Data Technology, you are an integral part of a team that works to develop high-quality data architecture solutions for various software applications, platform, and data products. Drive significant business impact and help shape the global target state architecture through your capabilities in multiple data architecture domains.

Represents the data architecture team at technical governance bodies and provides feedback on proposed improvements to data architecture governance practices.
Evaluates new and current technologies using existing data architecture standards and frameworks.
Regularly provides technical guidance and direction to support the business and its technical teams, contractors, and vendors.
Designs secure, high-quality, scalable solutions and reviews architecture solutions designed by others.
Drives data architecture decisions that impact data product & platform design, application functionality, and technical operations and processes.
Serves as a function-wide subject matter expert in one or more areas of focus.
Actively contributes to the data engineering community as an advocate of firmwide data frameworks, tools, and practices in the Software Development Life Cycle.
Influences peers and project decision-makers to consider the use and application of leading-edge technologies.
Advises junior architects and technologists.

Required qualifications, capabilities, and skills:
- Formal training or certification on software engineering concepts and 5+ years of applied experience.
- Advanced knowledge of architecture, applications, and technical processes with considerable in-depth knowledge in the data architecture discipline and solutions (e.g., data modeling, native cloud data services, business intelligence, artificial intelligence, machine learning, data domain driven design, etc.).
- Practical cloud-based data architecture and deployment experience, preferably AWS.
- Practical SQL development experience in cloud-native relational databases, e.g., Snowflake, Athena, Postgres.
- Ability to deliver various types of data models with multiple deployment targets, e.g., conceptual, logical, and physical data models deployed as operational vs. analytical data stores.
- Advanced in one or more data engineering disciplines, e.g., streaming, ELT, event processing.
- Ability to tackle design and functionality problems independently with little to no oversight.
- Ability to evaluate current and emerging technologies to select or recommend the best solutions for the future state data architecture.

Preferred qualifications, capabilities, and skills:
- Financial services experience; card and banking a big plus.
- Practical experience in modern data processing technologies, e.g., Kafka streaming, DBT, Spark, Airflow, etc.
- Practical experience in data mesh and/or data lake.
- Practical experience in machine learning/AI with Python development a big plus.
- Practical experience in graph and semantic technologies, e.g., RDF, LPG, Neo4j, Gremlin.
- Knowledge of architecture assessment frameworks, e.g., Architecture Trade-off Analysis.

Posted 6 days ago

Apply

6.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote

Iris's Fortune 100 direct client is looking for a Senior AWS Data Engineer for the Pune / Noida / Gurgaon location.

Position: Senior AWS Data Engineer
Location: Pune / Noida / Gurgaon
Hybrid: 3 days in office, 2 days work from home
Preferred: Immediate joiners or 0-30 days notice period

Job Description:
6 to 10 years of overall experience.
Good experience in data engineering is required.
Good experience in AWS, SQL, AWS Glue, PySpark, Airflow, CDK, and Redshift.
Good communication skills are required.

About Iris Software Inc.
With 4,000+ associates and offices in India, U.S.A. and Canada, Iris Software delivers technology services and solutions that help clients complete fast, far-reaching digital transformations and achieve their business goals. A strategic partner to Fortune 500 and other top companies in financial services and many other industries, Iris provides a value-driven approach - a unique blend of highly-skilled specialists, software engineering expertise, cutting-edge technology, and flexible engagement models. High customer satisfaction has translated into long-standing relationships and preferred-partner status with many of our clients, who rely on our 30+ years of technical and domain expertise to future-proof their enterprises. Associates of Iris work on mission-critical applications supported by a workplace culture that has won numerous awards in the last few years, including Certified Great Place to Work in India; Top 25 GPW in IT & IT-BPM; Ambition Box Best Place to Work, #3 in IT/ITES; and Top Workplace NJ-USA.

Posted 6 days ago

Apply

5.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Profile: Data Engineer
Experience: 5+ Years

Who we are:
Innovatics is a place where innovation blends with analytics. We take pride in our command of bleeding-edge technologies, strategic business moves, and radiant business transformation. We deliver business growth opportunities never thought of before and assist businesses in accelerating their digital transformation journey.

About the role:
We're looking for a Data Engineer who's passionate about delivering tangible results, who has a positive attitude, and who enjoys solving problems.

Requirements:

Technical Skills:
3+ years of experience in a Data Engineer role.
Experience with object-oriented/object-function scripting languages: Python, Scala, Golang, Java, etc.
Experience with big data tools such as Spark, Hadoop, Kafka, Airflow, and Hive.
Experience with streaming data: Spark, Kinesis, Kafka, Pub/Sub, or Event Hub.
Experience with GCP, Azure Data Factory, or AWS.
Strong SQL scripting.
Experience with ETL tools.
Knowledge of the Snowflake data warehouse.
Knowledge of orchestration frameworks: Airflow/Luigi.
Good to have: knowledge of Data Quality Management frameworks.
Good to have: knowledge of Master Data Management.
Self-learning abilities are a must.
Familiarity with upcoming new technologies is a strong plus.
Should have a bachelor's degree in big data analytics, computer engineering, or a related field.

Personal Competency:
Strong communication skills are a must.
Self-motivated and detail-oriented.
Strong organizational skills.
Ability to prioritize workloads and meet deadlines.
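For illustration only (the brokers, topic, schema, and storage paths are hypothetical, not part of the posting), here is a minimal Spark Structured Streaming job of the Kafka-to-storage kind this role mentions:

```python
from pyspark.sql import SparkSession, functions as F, types as T

# Requires the Kafka connector on the classpath, e.g.
# --packages org.apache.spark:spark-sql-kafka-0-10_2.12:<spark-version>
spark = SparkSession.builder.appName("kafka_stream_sketch").getOrCreate()

# Hypothetical schema for the JSON event payload
schema = T.StructType([
    T.StructField("event_id", T.StringType()),
    T.StructField("user_id", T.StringType()),
    T.StructField("event_ts", T.TimestampType()),
    T.StructField("amount", T.DoubleType()),
])

# Read a stream from Kafka (placeholder brokers/topic)
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")
    .option("subscribe", "events")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers bytes; decode and parse the JSON value column
events = (
    raw.selectExpr("CAST(value AS STRING) AS json")
    .select(F.from_json("json", schema).alias("e"))
    .select("e.*")
)

# Write micro-batches to Parquet with checkpointing for reliable file output
query = (
    events.writeStream.format("parquet")
    .option("path", "s3a://example-bucket/events_stream/")
    .option("checkpointLocation", "s3a://example-bucket/checkpoints/events_stream/")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```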

Posted 6 days ago

Apply

5.0 years

0 Lacs

India

Remote

Where you’ll work: India (Remote)

Engineering at GoTo
We’re the trailblazers of remote work technology. We build powerful, flexible work software that empowers everyone to live their best life, at work and beyond. And blaze even more trails along the way. There’s ample room for growth, so you can blaze your own trail here too. When you join a GoTo product team, you’ll take on a key role in this process and see your work be used by millions of users worldwide.

Your Day to Day
As a Senior Data Engineer, you would be:
Design and Develop Pipelines: Build robust, scalable, and efficient ETL/ELT data pipelines to process structured data from diverse sources.
Big Data Processing: Develop and optimize large-scale data workflows using Apache Spark, with strong hands-on experience in building ETL pipelines.
Cloud-Native Data Solutions: Architect and implement data solutions using AWS services such as S3, EMR, Lambda, and EKS.
Data Governance: Manage and govern data using catalogs like Hive or Unity Catalog; ensure strong data lineage, access controls, and metadata management.
Workflow Orchestration: Schedule, monitor, and orchestrate workflows using Apache Airflow or similar tools.
Data Quality & Monitoring: Implement quality checks, logging, monitoring, and alerting to ensure pipeline reliability and visibility.
Cross-Functional Collaboration: Partner with analysts, data scientists, and business stakeholders to deliver high-quality data for applications and enable self-service BI.
Compliance & Security: Uphold best practices in data governance, security, and compliance across the data ecosystem.
Mentorship & Standards: Mentor junior engineers and help evolve engineering practices including CI/CD, testing, and documentation.

What We’re Looking For
As a Senior Data Engineer, your background will look like:
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
5+ years of experience in data engineering or software development, with a proven record of maintaining production-grade pipelines.
Proficient in Python and SQL for data transformation and analytics.
Strong expertise in Apache Spark, including data lake management, ACID transactions, schema enforcement/evolution, and time travel.
In-depth knowledge of AWS services, especially S3, EMR, Lambda, and EKS, with a solid grasp of cloud architecture and security best practices.
Solid data modeling skills (dimensional, normalized) and an understanding of data warehousing and lakehouse paradigms.
Experience with BI tools like Tableau or Power BI.
Familiar with setting up data quality, monitoring, and observability frameworks.
Excellent communication and collaboration skills, with the ability to thrive in an agile and multicultural team environment.

Nice to Have
Experience working on the Databricks Platform.
Knowledge of Delta or Apache Iceberg file formats.
Passion for Machine Learning and AI; enthusiasm to explore and apply intelligent systems.

What We Offer
At GoTo, we believe in supporting our employees with a comprehensive range of benefits designed to fit your life, at work and beyond. Here are just some of the benefits and perks you can expect when you join our team:
Comprehensive health benefits, life and disability insurance, and a fertility and family-forming support program.
Generous paid time off, paid holidays, volunteer time off, and quarterly self-care days and no-meeting days.
Tuition and reading reimbursement programs to support your continuous learning and professional growth.
Thrive Global Wellness Program, confidential Employee Assistance Program (EAP), as well as One to One Wellness Coaching.
Employee programs, including Employee Resource Groups (ERGs), GoTo Gives, and our charitable matching program, to amplify your connection and impact.
Registered Retirement Savings Plan (RRSP) to help you plan for your future.
GoTo performance bonus program to celebrate your impact and contributions.
Monthly remote work stipend to support your home office expenses.

At GoTo, you’ll find the flexibility, resources, and support you need to thrive at work, at home, and everywhere in between. You’ll work towards a shared goal with an open-minded, cohesive team that’s greater than the sum of its parts. We’re committed to creating an inclusive space for everyone, because we know unique perspectives make us a stronger company and community. Join us and be part of a company that invests in your future, where together we’ll Be Real, Think Big, Move Fast, Keep Growing, and stay Customer Obsessed. Learn more.
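Since the role highlights ACID tables, schema evolution, and time travel on Spark, here is a small generic Delta Lake sketch (assuming the open-source delta-spark package is configured; the paths and sample data are placeholders, not GoTo's systems):

```python
from pyspark.sql import SparkSession

# Assumes delta-spark is installed and on the classpath (pip install delta-spark)
spark = (
    SparkSession.builder.appName("delta_sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

path = "/tmp/example_delta_table"  # placeholder location

# Initial write creates an ACID table backed by a transaction log
spark.createDataFrame([(1, "alpha"), (2, "beta")], ["id", "label"]) \
    .write.format("delta").save(path)

# Schema evolution: append a frame with an extra column using mergeSchema
spark.createDataFrame([(3, "gamma", 0.5)], ["id", "label", "score"]) \
    .write.format("delta").mode("append").option("mergeSchema", "true").save(path)

# Time travel: read the table as of its first version
v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)
v0.show()
```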

Posted 6 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies