
3678 Redshift Jobs - Page 27

JobPe aggregates job listings for easy access, but applications are submitted directly on the original job portal.

8.0 - 12.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Role: Lead AWS Data Engineer. Experience Level: 8 to 12 Years. Notice Period: Immediate to 15 Days. Job Description: We are seeking a highly experienced Lead AWS Data Engineer to architect and manage scalable data solutions. In this role, you will lead a team of data engineers, collaborate with cross-functional teams, and establish best practices in data engineering to enable data-driven decision-making. A strong background in SQL, cloud platforms (AWS), and the insurance domain (P&C) is essential. Key Responsibilities: Design, build, and manage scalable data pipelines and ETL/ELT processes. Architect secure, scalable, and high-performance data platforms across cloud (AWS, GCP, Azure) and on-prem environments. Provide technical leadership, mentoring, and guidance to the data engineering team. Collaborate with data scientists, analysts, and business stakeholders to gather requirements and deliver robust solutions. Optimize data workflows for performance, cost-efficiency, and reliability. Implement and enforce data governance, quality controls, and security standards. Monitor, troubleshoot, and maintain data infrastructure for high availability and reliability. Stay current with emerging data technologies and recommend adoption where appropriate. Required Qualifications: 8–12 years of hands-on experience in data engineering, with strong expertise in SQL, Python, and Apache Spark; ETL, data warehousing, and big data platforms; and cloud data services (AWS Glue, Redshift, S3, Athena; GCP/BigQuery or Snowflake is a plus). Proven experience leading and mentoring data engineering teams. Strong knowledge of data modeling, pipeline orchestration tools (e.g., Airflow, Control-M), and data governance practices. Excellent problem-solving, communication, and stakeholder management skills. Domain Expertise (Must Have): Strong background in the insurance domain, specifically P&C (Property & Casualty). Proven ability to align data solutions with business requirements in the insurance/finance sectors. Education: Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field. Other Requirements: Excellent verbal and written English communication skills. Must be available to join within 15 days.
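For illustration, a minimal Airflow 2.x DAG sketch of the kind of ETL orchestration this role describes. The DAG name, bucket, and table are hypothetical placeholders, and the two tasks are stubs standing in for a real Glue/Spark extract and a Redshift load.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_to_s3(**context):
    # Stand-in for a Spark/Glue extract that stages source data in S3.
    print("extracting source data to s3://example-staging-bucket/daily/")


def load_to_redshift(**context):
    # Stand-in for a Redshift COPY that ingests the staged files in parallel.
    print("COPY analytics.daily_policies FROM 's3://example-staging-bucket/daily/' ...")


with DAG(
    dag_id="daily_policy_etl",              # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_to_s3", python_callable=extract_to_s3)
    load = PythonOperator(task_id="load_to_redshift", python_callable=load_to_redshift)

    extract >> load  # the load runs only after a successful extract
```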

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

About Us Yubi stands for ubiquitous. But Yubi will also stand for transparency, collaboration, and the power of possibility. From being a disruptor in India’s debt market to marching towards global corporate markets from one product to one holistic product suite with seven products Yubi is the place to unleash potential. Freedom, not fear. Avenues, not roadblocks. Opportunity, not obstacles. About Yubi Yubi, formerly known as CredAvenue, is re-defining global debt markets by freeing the flow of finance between borrowers, lenders, and investors. We are the world's possibility platform for the discovery, investment, fulfillment, and collection of any debt solution. At Yubi, opportunities are plenty and we equip you with tools to seize it. In March 2022, we became India's fastest fintech and most impactful startup to join the unicorn club with a Series B fundraising round of $137 million. In 2020, we began our journey with a vision of transforming and deepening the global institutional debt market through technology. Our two-sided debt marketplace helps institutional and HNI investors find the widest network of corporate borrowers and debt products on one side and helps corporates to discover investors and access debt capital efficiently on the other side. Switching between platforms is easy, which means investors can lend, invest and trade bonds - all in one place. All of our platforms shake up the traditional debt ecosystem and offer new ways of digital finance. Yubi Credit Marketplace - With the largest selection of lenders on one platform, our credit marketplace helps enterprises partner with lenders of their choice for any and all capital requirements. Yubi Invest - Fixed income securities platform for wealth managers & financial advisors to channel client investments in fixed income Financial Services Platform - Designed for financial institutions to manage co-lending partnerships & asset based securitization Spocto - Debt recovery & risk mitigation platform Corpository - Dedicated SaaS solutions platform powered by Decision-grade data, Analytics, Pattern Identifications, Early Warning Signals and Predictions to Lenders, Investors and Business Enterprises So far, we have on-boarded over 17000+ enterprises, 6200+ investors & lenders and have facilitated debt volumes of over INR 1,40,000 crore. Backed by marquee investors like Insight Partners, B Capital Group, Dragoneer, Sequoia Capital, LightSpeed and Lightrock, we are the only-of-its-kind debt platform globally, revolutionizing the segment. At Yubi, People are at the core of the business and our most valuable assets. Yubi is constantly growing, with 1000+ like-minded individuals today, who are changing the way people perceive debt. We are a fun bunch who are highly motivated and driven to create a purposeful impact. Come, join the club to be a part of our epic growth story. About The Role As a Senior DevOps Engineer, you will be part of a highly talented DevOps team who manages the entire infrastructure for Yubi. You will work with development teams to understand their requirements, optimize them to reduce costs, create scripts for creating and configuring them, maintain and monitor the infrastructure. As a financial services firm, security is of utmost concern to our firm and you will ensure that all data handled by the entire platform, key configurations, passwords etc. are secure from leaks. 
You will ensure that the platform is scaled to meet our user needs and optimally performing at all times and our users get a world class experience using our software products. You will ensure that data, source code and configurations are adequately backed up and prevent loss of data. You will be well versed in tools to automate all such DevOps tasks. Responsibilities Troubleshoot web and backend applications and issues. Good understanding on multi-tier applications. Knowledge on AWS security, Application security, security best practices. SCA analysis, analyzing the security reports, sonarqube profiles and gates. Able to draft solutions to improve security based on reporting. Lead, drive and implement highly scalable, highly available and complex solutions. Up to date with latest devops tools and ecosystem. Excellent written and verbal communication. Requirements Bachelor’s/Master’s degree in Computer Science or equivalent work experience 3-6 years of working experience as DevOps engineer AWS Cloud expertise is a must and primary. Azure/GCP cloud knowledge is a plus. Extensive knowledge and experience with major AWS Services. Advanced AWS networking setup, routing, vpn, cross account networking, use of proxies. Experience with AWS multi account infrastructure. Infrastructure as code using cloudformation or terraform. Containerization – docker/kubernetes/ecs/fargate. Configuration management tools such as chef/ansible/salt. CI/CD - Jenkins/Code pipeline/Code deploy. Basic expertise in scripting languages such as shell, python or nodejs. Adept at Continuous Integration/Continuous Deployment Experience working with source code repos like Gitlab, Github or Bitbucket. Monitoring tools: cloudwatch agent, prometheus, grafana, newrelic, Dynatrace, datadog, openapm..etc ELK knowledge is a plus. Knowledge on chat-ops. Adept at using various operating systems, Windows, Mac and Linux Expertise in using command line tools, AWS CLI, Git, or other programming aws apis. Experience with both sql (rds postgres/mysql) and no-sql databases (mongo), data warehousing (redshift), datalake. Knowledge and experience in instrumentation, metrics and monitoring concepts. Benefits YUBI is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, or age.
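For illustration, a small boto3 sketch of the security-focused automation scripting this role describes: it audits S3 buckets for missing default encryption. Nothing here is specific to Yubi's environment; it only assumes standard AWS credentials are configured.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        s3.get_bucket_encryption(Bucket=name)
        print(f"{name}: default encryption enabled")
    except ClientError as err:
        if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
            print(f"{name}: NO default encryption -- needs remediation")
        else:
            raise
```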

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Description Data is the new Oil !! Are you passionate about data? Does the prospect of dealing with massive volumes of data excite you? Do you want to build big-data solutions that process billions of records a day in a scalable fashion using AWS technologies? Do you want to create the next-generation tools for intuitive data access? If so, Amazon Finance Technology (FinTech) is for you! Amazon's Financial Technology team is looking for a passionate, results-oriented, inventive Data Engineers who can work on massively scalable and distributed systems. The candidate thrives in a fast-paced environment, understands how to deal with large sets of data and transactions and will help us deliver on a new generation of software, leveraging Amazon Web Services. The candidate is passionate about technology and wants to be involved with real business problems. Our platform serves Amazon's finance, tax and accounting functions across the globe. We are looking for an experienced Data Engineer to join the FinTech Tax teams that builds and operate technology for supporting Amazon's tax compliance and audits needs worldwide. Our teams are responsible for building a big-data platform to stream billions of transactions a day, process and to organize them into output required for tax compliance globally. With Amazon's data-driven culture, our platform will provide accurate, timely and actionable data for our customers. As a member of this team, your mission will be to design, develop, document and support massively scalable, distributed real time systems. Using Python, Java, Object oriented design patterns, distributed databases and other innovative storage techniques, you will build and deliver software systems that support complex and rapidly evolving business requirements. You will communicate your ideas effectively to achieve the right outcome for your team and customer. Your code, design, and implementation decisions will set a great example to other engineers. As a senior engineer, you will provide guidance and support for other engineers with industry best practices and direction. You will also have the opportunity to impact the technical decisions in the broader organisation as well as mentor other engineers in the team. Key job responsibilities This is an exciting opportunity for a seasoned Data Engineer to take on a pivotal role in the architecture, design, implementation, and deployment of large-scale, critical, and complex financial applications. You will push your design and architecture skills to the limit by owning all aspects of end-to-end solutions. Leveraging agile methodologies, you will iteratively build and deliver high-quality results in a fast-paced environment. With strong verbal and written communication abilities, self-motivation, and a collaborative mindset, you will work across Amazon engineering teams and business teams globally to plan, design, execute, and implement this new platform across multiple geographies. Throughout the project lifecycle, you will review requirements, design services that lay the foundation for the new technology platform, integrate with existing architectures, develop and test code (Python, Scala, Java), and deliver seamless implementations for Global Tax customers. In a hands-on role, you will manage day-to-day activities, participate in designs, design reviews, and code reviews with the engineering team. Utilizing AWS technologies such as EC2, RDS/DynamoDB/RedShift, S3, EMR, Glue and QuickSight you will build solutions. 
You will design and code technical solutions to deliver value to tax customers. Additionally, you will contribute to a suite of tools hosted on the AWS infrastructure, working with a variety of tools across the spectrum of the software development lifecycle. About The Team The FinTech International Tax Compliance (FIT Compliance) team oversees the Tax Data Warehouse platform, a large-scale data platform and reporting solution designed for indirect tax compliance across Amazon and other organizations. This platform enables businesses to adhere to mandatory tax regulations, drive data accuracy, and ensure audit readiness, providing consistent and reliable data to tax teams in the EMEA regions. As Amazon expands its operations in EMEA, the FIT Compliance team plays a crucial role in delivering mandatory tax reporting obligations, facilitating these launches. Furthermore, their charter encompasses building solutions to meet evolving tax legislation changes, audit requests, and technology requirements globally, such as International Recon and Digital Service Tax (DST). The team is also investing in building the next generation of strategic platforms such as Unified tax ledger(UTL) and Golden data Set(GDS). Basic Qualifications 3+ years of data engineering experience Experience with data modeling, warehousing and building ETL pipelines Experience with SQL Preferred Qualifications Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, FireHose, Lambda, and IAM roles and permissions Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases) Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI HYD 13 SEZ Job ID: A2883904
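For illustration, a minimal sketch of the S3-to-Redshift loading pattern this posting alludes to, using psycopg2 to issue a Redshift COPY command. The cluster endpoint, schema, table, bucket, and IAM role ARN are hypothetical placeholders.

```python
import psycopg2

# All connection details and object names below are placeholders.
conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="***",
)

copy_sql = """
    COPY finance.transactions
    FROM 's3://example-staging-bucket/transactions/2025-07-01/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
    FORMAT AS PARQUET;
"""

with conn, conn.cursor() as cur:
    cur.execute(copy_sql)  # Redshift loads the staged Parquet files in parallel
conn.close()
```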

Posted 2 weeks ago

Apply

3.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

ZS is a place where passion changes lives. As a management consulting and technology firm focused on improving life and how we live it, our most valuable asset is our people. Here you’ll work side-by-side with a powerful collective of thinkers and experts shaping life-changing solutions for patients, caregivers and consumers, worldwide. ZSers drive impact by bringing a client first mentality to each and every engagement. We partner collaboratively with our clients to develop custom solutions and technology products that create value and deliver company results across critical areas of their business. Bring your curiosity for learning; bold ideas; courage and passion to drive life-changing impact to ZS. Our most valuable asset is our people . At ZS we honor the visible and invisible elements of our identities, personal experiences and belief systems—the ones that comprise us as individuals, shape who we are and make us unique. We believe your personal interests, identities, and desire to learn are part of your success here. Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about. Role Overview: The Tableau Developer will be responsible for creating data visualizations, dashboards, and reporting solutions using Tableau Desktop, Server, and Prep to support business analytics and operational reporting needs. What you’ll do: Design and develop interactive dashboards and data visualizations using Tableau. Develop data models, calculations, and KPIs in line with business requirements. Connect to diverse data sources (AWS Redshift, RDS, flat files, APIs) and optimize data extracts. Collaborate with business and data engineering teams to define reporting specifications. Optimize report performance and implement best practices for visualization and user experience. Manage Tableau Server content deployment and governance standards. What you’ll bring: 3-6 years of Tableau development experience. Strong knowledge of data visualization best practices and dashboard performance tuning. Proficiency in SQL and familiarity with cloud-based data sources (AWS preferred). Experience with Tableau Prep and Tableau Server management is a plus. Additional Skills: Strong communication skills, both verbal and written, with the ability to structure thoughts logically during discussions and presentations Capability to simplify complex concepts into easily understandable frameworks and presentations Proficiency in working within a virtual global team environment, contributing to the timely delivery of multiple projects Travel to other offices as required to collaborate with clients and internal project teams Perks & Benefits: ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth and professional development. Our robust skills development programs, multiple career progression options and internal mobility paths and collaborative culture empowers you to thrive as an individual and global team member. We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections. 
Travel: Travel is a requirement at ZS for client facing ZSers; business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures. Considering applying? At ZS, we're building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all. We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring. If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above. ZS is an equal opportunity employer and is committed to providing equal employment and advancement opportunities without regard to any class protected by applicable law. To Complete Your Application: Candidates must possess or be able to obtain work authorization for their intended country of employment.An on-line application, including a full set of transcripts (official or unofficial), is required to be considered. NO AGENCY CALLS, PLEASE. Find Out More At: www.zs.com

Posted 2 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

ZS is a place where passion changes lives. As a management consulting and technology firm focused on improving life and how we live it, our most valuable asset is our people. Here you’ll work side-by-side with a powerful collective of thinkers and experts shaping life-changing solutions for patients, caregivers and consumers, worldwide. ZSers drive impact by bringing a client first mentality to each and every engagement. We partner collaboratively with our clients to develop custom solutions and technology products that create value and deliver company results across critical areas of their business. Bring your curiosity for learning; bold ideas; courage and passion to drive life-changing impact to ZS. Our most valuable asset is our people . At ZS we honor the visible and invisible elements of our identities, personal experiences and belief systems—the ones that comprise us as individuals, shape who we are and make us unique. We believe your personal interests, identities, and desire to learn are part of your success here. Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about. Business Technology ZS’s Technology group focuses on scalable strategies, assets and accelerators that deliver to our clients enterprise-wide transformation via cutting-edge technology. We leverage digital and technology solutions to optimize business processes, enhance decision-making, and drive innovation. Our services include, but are not limited to, Digital and Technology advisory, Product and Platform development and Data, Analytics and AI implementation. What you’ll do: Work with business stakeholders to understand their business needs. Create data pipelines that extract, transform, and load (ETL) from various sources into a usable format in a Data warehouse. Clean, filter, and validate data to ensure it meets quality and format standards. Develop data model objects (tables, views) to transform the data into unified format for downstream consumption. Expert in monitoring, controlling, configuring, and maintaining processes in cloud data platform. Optimize data pipelines and data storage for performance and efficiency. Participate in code reviews and provide meaningful feedback to other team members. Provide technical support and troubleshoot issue(s). What you’ll bring : Bachelor’s degree in Computer Science, Information Technology, or a related field, or equivalent work experience. Experience working in the AWS cloud platform. Data engineer with expertise in developing big data and data warehouse platforms. Experience working with structured and semi-structured data. Expertise in developing big data solutions, ETL/ELT pipelines for data ingestion, data transformation, and optimization techniques. Experience working directly with technical and business teams. Able to create technical documentation. Excellent problem-solving and analytical skills. Strong communication and collaboration abilities. AWS (Big Data services) - S3, Glue, Athena, EMR Programming - Python, Spark, SQL, Mulesoft,Talend, Dbt Data warehouse - ETL, Redshift / Snowflake Additional Skills : Experience in data modeling. Certified in AWS platform for Data Engineer skills. 
Experience with ITSM processes/tools such as ServiceNow, Jira Understanding of Spark, Hive, Kafka, Kinesis, Spark Streaming, and Airflow Perks & Benefits: ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth and professional development. Our robust skills development programs, multiple career progression options and internal mobility paths and collaborative culture empowers you to thrive as an individual and global team member. We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections. Travel: Travel is a requirement at ZS for client facing ZSers; business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures. Considering applying? At ZS, we're building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all. We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring. If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above. ZS is an equal opportunity employer and is committed to providing equal employment and advancement opportunities without regard to any class protected by applicable law. To Complete Your Application: Candidates must possess or be able to obtain work authorization for their intended country of employment.An on-line application, including a full set of transcripts (official or unofficial), is required to be considered. NO AGENCY CALLS, PLEASE. Find Out More At: www.zs.com
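For illustration, a short boto3 sketch of querying data in S3 through Athena, one of the AWS big-data services listed above. The database, query, and results bucket are hypothetical placeholders.

```python
import time

import boto3

athena = boto3.client("athena")

query = athena.start_query_execution(
    QueryString="SELECT order_date, SUM(amount) AS revenue FROM sales GROUP BY order_date",
    QueryExecutionContext={"Database": "analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = query["QueryExecutionId"]

# Poll until the query finishes.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(f"fetched {len(rows) - 1} data rows")  # the first row is the header
else:
    print(f"query ended in state {state}")
```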

Posted 2 weeks ago

Apply

15.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Description At AWS, we are looking for a Delivery Practice Manager with a successful record of leading enterprise customers through a variety of transformative projects involving IT Strategy, distributed architecture, and hybrid cloud operations. AWS Global Services includes experts from across AWS who help our customers design, build, operate, and secure their cloud environments. Customers innovate with AWS Professional Services, upskill with AWS Training and Certification, optimize with AWS Support and Managed Services, and meet objectives with AWS Security Assurance Services. Our expertise and emerging technologies include AWS Partners, AWS Sovereign Cloud, AWS International Product, and the Generative AI Innovation Center. You’ll join a diverse team of technical experts in dozens of countries who help customers achieve more with the AWS cloud. Professional Services engage in a wide variety of projects for customers and partners, providing collective experience from across the AWS customer base and are obsessed about strong success for the Customer. Our team collaborates across the entire AWS organization to bring access to product and service teams, to get the right solution delivered and drive feature innovation based upon customer needs. 10034 Key job responsibilities Engage customers - collaborate with enterprise sales managers to develop strong customer and partner relationships and build a growing business in a geographic territory, driving AWS adoption in key markets and accounts. Drive infrastructure engagements - including short on-site projects proving the value of AWS services to support new distributed computing models. Coach and teach - collaborate with AWS field sales, pre-sales, training and support teams to help partners and customers learn and use AWS services such as Amazon Databases – RDS/Aurora/DynamoDB/Redshift, Amazon Elastic Compute Cloud (EC2), Amazon Simple Storage Service (S3), AWS Identity and Access Management(IAM), etc. Deliver value - lead high quality delivery of a variety of customized engagements with partners and enterprise customers in the commercial and public sectors. Lead great people - attract top IT architecture talent to build high performing teams of consultants with superior technical depth, and customer relationship skills Be a customer advocate - Work with AWS engineering teams to convey partner and enterprise customer feedback as input to AWS technology roadmaps Build organization assets – identify patterns and implement solutions that can be leveraged across customer base. Improve productivity through tooling and process improvements. About The Team Diverse Experiences AWS values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying. Why AWS? Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. Inclusive Team Culture AWS values curiosity and connection. Our employee-led and company-sponsored affinity groups promote inclusion and empower our people to take pride in what makes us unique. Our inclusion events foster stronger, more collaborative teams. 
Our continual innovation is fueled by the bold ideas, fresh perspectives, and passionate voices our teams bring to everything we do. Mentorship & Career Growth We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional. Work/Life Balance We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud. Basic Qualifications Bachelor’s degree in Information Science / Information Technology, Computer Science, Engineering, Mathematics, Physics, or a related field. 15+ years of IT implementation and/or delivery experience, with 5+ years working in an IT Professional Services and/or consulting organization; and 5+ years of direct people management leading a team of consultants. Deep understanding of cloud computing, adoption strategy, transition challenges. Experience managing a consulting practice or teams responsible for KRAs. Ability to travel to client locations to deliver professional services as needed Preferred Qualifications Demonstrated ability to think strategically about business, product, and technical challenges. Vertical industry sales and delivery experience of contemporary services and solutions.Experience with design of modern, scalable delivery models for technology consulting services. Business development experience including complex agreements w/ integrators and ISVs .International sales and delivery experience with global F500 enterprise customers and partners Direct people management experience leading a team of at least 20 or manager of manager experience in a consulting practice. Use of AWS services in distributed environments with Microsoft, IBM, Oracle, HP, SAP etc. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - AWS ProServe IN - Telangana Job ID: A3037856

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: DevOps/MLOps Expert Location: Gurugram (On-Site) Employment Type: Full-Time Experience: 6 + years Qualification: B.Tech CSE About the Role We are seeking a highly skilled DevOps/MLOps Expert to join our rapidly growing AI-based startup building and deploying cutting-edge enterprise AI/ML solutions. This is a critical role that will shape our infrastructure, deployment pipelines, and scale our ML operations to serve large-scale enterprise clients. As our DevOps/MLOps Expert, you will be responsible for bridging the gap between our AI/ML development teams and production systems, ensuring seamless deployment, monitoring, and scaling of our ML-powered enterprise applications. You’ll work at the intersection of DevOps, Machine Learning, and Data Engineering in a fast-paced startup environment with enterprise-grade requirements. Key Responsibilities MLOps & Model Deployment • Design, implement, and maintain end-to-end ML pipelines from model development to production deployment • Build automated CI/CD pipelines specifically for ML models using tools like MLflow, Kubeflow, and custom solutions • Implement model versioning, experiment tracking, and model registry systems • Monitor model performance, detect drift, and implement automated retraining pipelines • Manage feature stores and data pipelines for real-time and batch inference • Build scalable ML infrastructure for high-volume data processing and analytics Enterprise Cloud Infrastructure & DevOps • Architect and manage cloud-native infrastructure with focus on scalability, security, and compliance • Implement Infrastructure as Code (IaC) using Terraform, CloudFormation, or Pulumi • Design and maintain Kubernetes clusters for containerized ML workloads • Build and optimize Docker containers for ML applications and microservices • Implement comprehensive monitoring, logging, and alerting systems • Manage secrets, security, and enterprise compliance requirements Data Engineering & Real-time Processing • Build and maintain large-scale data pipelines using Apache Airflow, Prefect, or similar tools • Implement real-time data processing and streaming architectures • Design data storage solutions for structured and unstructured data at scale • Implement data validation, quality checks, and lineage tracking • Manage data security, privacy, and enterprise compliance requirements • Optimize data processing for performance and cost efficiency Enterprise Platform Operations • Ensure high availability (99.9%+) and performance of enterprise-grade platforms • Implement auto-scaling solutions for variable ML workloads • Manage multi-tenant architecture and data isolation • Optimize resource utilization and cost management across environments • Implement disaster recovery and backup strategies • Build 24x7 monitoring and alerting systems for mission-critical applications Required Qualifications Experience & Education • 4-8 years of experience in DevOps/MLOps with at least 2+ years focused on enterprise ML systems • Bachelor’s/Master’s degree in Computer Science, Engineering, or related technical field • Proven experience with enterprise-grade platforms or large-scale SaaS applications • Experience with high-compliance environments and enterprise security requirements • Strong background in data-intensive applications and real-time processing systems Technical Skills Core MLOps Technologies • ML Frameworks: TensorFlow, PyTorch, Scikit-learn, Keras, XGBoost • MLOps Tools: MLflow, Kubeflow, Metaflow, DVC, Weights & Biases • Model Serving: TensorFlow 
Serving, PyTorch TorchServe, Seldon Core, KFServing • Experiment Tracking: MLflow, Neptune.ai, Weights & Biases, Comet DevOps & Cloud Technologies • Cloud Platforms: AWS, Azure, or GCP with relevant certifications • Containerization: Docker, Kubernetes (CKA/CKAD preferred) • CI/CD: Jenkins, GitLab CI, GitHub Actions, CircleCI • IaC: Terraform, CloudFormation, Pulumi, Ansible • Monitoring: Prometheus, Grafana, ELK Stack, Datadog, New Relic Programming & Scripting • Python (advanced) - primary language for ML operations and automation • Bash/Shell scripting for automation and system administration • YAML/JSON for configuration management and APIs • SQL for data operations and analytics • Basic understanding of Go or Java (advantage) Data Technologies • Data Pipeline Tools: Apache Airflow, Prefect, Dagster, Apache NiFi • Streaming & Real-time: Apache Kafka, Apache Spark, Apache Flink, Redis • Databases: PostgreSQL, MongoDB, Elasticsearch, ClickHouse • Data Warehousing: Snowflake, BigQuery, Redshift, Databricks • Data Versioning: DVC, LakeFS, Pachyderm Preferred Qualifications Advanced Technical Skills • Enterprise Security: Experience with enterprise security frameworks, compliance (SOC2, ISO27001) • High-scale Processing: Experience with petabyte-scale data processing and real-time analytics • Performance Optimization: Advanced system optimization, distributed computing, caching strategies • API Development: REST/GraphQL APIs, microservices architecture, API gateways Enterprise & Domain Experience • Previous experience with enterprise clients or B2B SaaS platforms • Experience with compliance-heavy industries (finance, healthcare, government) • Understanding of data privacy regulations (GDPR, SOX, HIPAA) • Experience with multi-tenant enterprise architectures Leadership & Collaboration • Experience mentoring junior engineers and technical team leadership • Strong collaboration with data science teams, product managers, and enterprise clients • Experience with agile methodologies and enterprise project management • Understanding of business metrics, SLAs, and enterprise ROI Growth Opportunities • Career Path: Clear progression to Lead DevOps Engineer or Head of Infrastructure • Technical Growth: Work with cutting-edge enterprise AI/ML technologies • Leadership: Opportunity to build and lead the DevOps/Infrastructure team • Industry Exposure: Work with Government & MNCs enterprise clients and cutting-edge technology stacks Success Metrics & KPIs Technical KPIs • System Uptime: Maintain 99.9%+ availability for enterprise clients • Deployment Frequency: Enable daily deployments with zero downtime • Performance: Ensure optimal response times and system performance • Cost Optimization: Achieve 20-30% annual infrastructure cost reduction • Security: Zero security incidents and full compliance adherence Business Impact • Time to Market: Reduce deployment cycles and improve development velocity • Client Satisfaction: Maintain 95%+ enterprise client satisfaction scores • Team Productivity: Improve engineering team efficiency by 40%+ • Scalability: Support rapid client base growth without infrastructure constraints Why Join Us Be part of a forward-thinking, innovation-driven company with a strong engineering culture. Influence high-impact architectural decisions that shape mission-critical systems. Work with cutting-edge technologies and a passionate team of professionals. Competitive compensation, flexible working environment, and continuous learning opportunities. 
How to Apply Please submit your resume and a cover letter outlining your relevant experience and how you can contribute to Aaizel Tech Labs’ success. Send your application to hr@aaizeltech.com , bhavik@aaizeltech.com or anju@aaizeltech.com.
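For illustration, a minimal MLflow experiment-tracking sketch of the kind of model versioning and metric logging this role covers. The experiment name is a placeholder, and a real pipeline would train on production data rather than a synthetic dataset.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; a production pipeline would use real features.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demo-classifier")  # hypothetical experiment name

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))

    # Everything logged here is versioned per run for later comparison or rollback.
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", accuracy)
    mlflow.sklearn.log_model(model, "model")
```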

Posted 2 weeks ago

Apply

7.0 - 12.0 years

22 - 25 Lacs

India

On-site

TECHNICAL ARCHITECT Key Responsibilities 1. Designing technology systems: Plan and design the structure of technology solutions, and work with design and development teams to assist with the process. 2. Communicating: Communicate system requirements to software development teams, and explain plans to developers and designers. They also communicate the value of a solution to stakeholders and clients. 3. Managing Stakeholders: Work with clients and stakeholders to understand their vision for the systems. Should also manage stakeholder expectations. 4. Architectural Oversight: Develop and implement robust architectures for AI/ML and data science solutions, ensuring scalability, security, and performance. Oversee architecture for data-driven web applications and data science projects, providing guidance on best practices in data processing, model deployment, and end-to-end workflows. 5. Problem Solving: Identify and troubleshoot technical problems in existing or new systems. Assist with solving technical problems when they arise. 6. Ensuring Quality: Ensure if systems meet security and quality standards. Monitor systems to ensure they meet both user needs and business goals. 7. Project management: Break down project requirements into manageable pieces of work, and organise the workloads of technical teams. 8. Tool & Framework Expertise: Utilise relevant tools and technologies, including but not limited to LLMs, TensorFlow, PyTorch, Apache Spark, cloud platforms (AWS, Azure, GCP), Web App development frameworks and DevOps practices. 9. Continuous Improvement: Stay current on emerging technologies and methods in AI, ML, data science, and web applications, bringing insights back to the team to foster continuous improvement. Technical Skills 1. Proficiency in AI/ML frameworks such as TensorFlow, PyTorch, Keras, and scikit-learn for developing machine learning and deep learning models. 2. Knowledge or experience working with self-hosted or managed LLMs. 3. Knowledge or experience with NLP tools and libraries (e.g., SpaCy, NLTK, Hugging Face Transformers) and familiarity with Computer Vision frameworks like OpenCV and related libraries for image processing and object recognition. 4. Experience or knowledge in back-end frameworks (e.g., Django, Spring Boot, Node.js, Express etc.) and building RESTful and GraphQL APIs. 5. Familiarity with microservices, serverless, and event-driven architectures. Strong understanding of design patterns (e.g., Factory, Singleton, Observer) to ensure code scalability and reusability. 6. Proficiency in modern front-end frameworks such as React, Angular, or Vue.js, with an understanding of responsive design, UX/UI principles, and state management (e.g., Redux) 7. In-depth knowledge of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra), as well as caching solutions (e.g., Redis, Memcached). 8. Expertise in tools such as Apache Spark, Hadoop, Pandas, and Dask for large-scale data processing. 9. Understanding of data warehouses and ETL tools (e.g., Snowflake, BigQuery, Redshift, Airflow) to manage large datasets. 10. Familiarity with visualisation tools (e.g., Tableau, Power BI, Plotly) for building dashboards and conveying insights. 11. Knowledge of deploying models with TensorFlow Serving, Flask, FastAPI, or cloud-native services (e.g., AWS SageMaker, Google AI Platform). 12. Familiarity with MLOps tools and practices for versioning, monitoring, and scaling models (e.g., MLflow, Kubeflow, TFX). 13. 
Knowledge or experience in CI/CD, IaC and Cloud Native toolchains. 14. Understanding of security principles, including firewalls, VPC, IAM, and TLS/SSL for secure communication. 15. Knowledge of API Gateway, service mesh (e.g., Istio), and NGINX for API security, rate limiting, and traffic management. Experience Required Technical Architect with 7 - 12 years of experience Salary 22-25 LPA Job Types: Full-time, Permanent Pay: ₹2,200,000.00 - ₹2,500,000.00 per year Work Location: In person
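For illustration, a minimal FastAPI model-serving sketch in the spirit of item 11 above (deploying models behind Flask/FastAPI). The model artifact and feature schema are hypothetical placeholders.

```python
from typing import List

import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # hypothetical pre-trained model artifact


class Features(BaseModel):
    values: List[float]  # flat feature vector in the order the model expects


@app.post("/predict")
def predict(features: Features):
    prediction = model.predict([features.values])[0]
    return {"prediction": float(prediction)}
```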

Posted 2 weeks ago

Apply

8.0 years

30 - 38 Lacs

Gurgaon

Remote

Role: AWS Data Engineer Location: Gurugram Mode: Hybrid Type: Permanent Job Description: We are seeking a talented and motivated Data Engineer with requisite years of hands-on experience to join our growing data team. The ideal candidate will have experience working with large datasets, building data pipelines, and utilizing AWS public cloud services to support the design, development, and maintenance of scalable data architectures. This is an excellent opportunity for individuals who are passionate about data engineering and cloud technologies and want to make an impact in a dynamic and innovative environment. Key Responsibilities: Data Pipeline Development: Design, develop, and optimize end-to-end data pipelines for extracting, transforming, and loading (ETL) large volumes of data from diverse sources into data warehouses or lakes. Cloud Infrastructure Management: Implement and manage data processing and storage solutions in AWS (Amazon Web Services) using services like S3, Redshift, Lambda, Glue, Kinesis, and others. Data Modeling: Collaborate with data scientists, analysts, and business stakeholders to define data requirements and design optimal data models for reporting and analysis. Performance Tuning & Optimization: Identify bottlenecks and optimize query performance, pipeline processes, and cloud resources to ensure cost-effective and scalable data workflows. Automation & Scripting: Develop automated data workflows and scripts to improve operational efficiency using Python, SQL, or other scripting languages. Collaboration & Documentation: Work closely with data analysts, data scientists, and other engineering teams to ensure data availability, integrity, and quality. Document processes, architectures, and solutions clearly. Data Quality & Governance: Ensure the accuracy, consistency, and completeness of data. Implement and maintain data governance policies to ensure compliance and security standards are met. Troubleshooting & Support: Provide ongoing support for data pipelines and troubleshoot issues related to data integration, performance, and system reliability. Qualifications: Essential Skills: Experience: 8+ years of professional experience as a Data Engineer, with a strong background in building and optimizing data pipelines and working with large-scale datasets. AWS Experience: Hands-on experience with AWS cloud services, particularly S3, Lambda, Glue, Redshift, RDS, and EC2. ETL Processes: Strong understanding of ETL concepts, tools, and frameworks. Experience with data integration, cleansing, and transformation. Programming Languages: Proficiency in Python, SQL, and other scripting languages (e.g., Bash, Scala, Java). Data Warehousing: Experience with relational and non-relational databases, including data warehousing solutions like AWS Redshift, Snowflake, or similar platforms. Data Modeling: Experience in designing data models, schema design, and data architecture for analytical systems. Version Control & CI/CD: Familiarity with version control tools (e.g., Git) and CI/CD pipelines. Problem-Solving: Strong troubleshooting skills, with an ability to optimize performance and resolve technical issues across the data pipeline. Desirable Skills: Big Data Technologies: Experience with Hadoop, Spark, or other big data technologies. Containerization & Orchestration: Knowledge of Docker, Kubernetes, or similar containerization/orchestration technologies. Data Security: Experience implementing security best practices in the cloud and managing data privacy requirements. 
Data Streaming: Familiarity with data streaming technologies such as AWS Kinesis or Apache Kafka. Business Intelligence Tools: Experience with BI tools (Tableau, Quicksight) for visualization and reporting. Agile Methodology: Familiarity with Agile development practices and tools (Jira, Trello, etc.) Job Type: Permanent Pay: ₹3,000,000.00 - ₹3,800,000.00 per year Benefits: Work from home Schedule: Day shift Monday to Friday Experience: AWS Glue Catalog : 3 years (Required) Data Engineering : 6 years (Required) AWS CDK, Cloud-formation, Lambda, Step-function : 3 years (Required) AWS Elastic MapReduce (EMR): 3 years (Required) Work Location: In person
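For illustration, the skeleton of an AWS Glue PySpark job similar to the catalog-driven pipelines this posting describes. The Glue database, table, and output bucket are hypothetical placeholders.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a table registered in the Glue Data Catalog.
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)

# Light transformation: keep completed orders only.
completed = orders.filter(f=lambda row: row["status"] == "completed")

# Write the curated output back to S3 as Parquet.
glue_context.write_dynamic_frame.from_options(
    frame=completed,
    connection_type="s3",
    connection_options={"path": "s3://example-curated-bucket/orders/"},
    format="parquet",
)

job.commit()
```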

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Position Overview This role is responsible for defining and delivering ZURU’s next-generation data architecture—built for global scalability, real-time analytics, and AI enablement. You will lead the unification of fragmented data systems into a cohesive, cloud-native platform that supports advanced business intelligence and decision-making. Sitting at the intersection of data strategy, engineering, and commercial enablement, this role demands both deep technical acumen and strong cross-functional influence. You will drive the vision and implementation of robust data infrastructure, champion governance standards, and embed a culture of data excellence across the organisation. Position Impact In the first six months, the Head of Data Architecture will gain deep understanding of ZURU’s operating model, technology stack, and data fragmentation challenges. You’ll conduct a comprehensive review of current architecture, identifying performance gaps, security concerns, and integration challenges across systems like SAP, Odoo, POS, and marketing platforms. By month twelve, you’ll have delivered a fully aligned architecture roadmap—implementing cloud-native infrastructure, data governance standards, and scalable models and pipelines to support AI and analytics. You will have stood up a Centre of Excellence for Data, formalised global data team structures, and established yourself as a trusted partner to senior leadership. What are you Going to do? Lead Global Data Architecture: Own the design, evolution, and delivery of ZURU’s enterprise data architecture across cloud and hybrid environments. Consolidate Core Systems: Unify data sources across SAP, Odoo, POS, IoT, and media into a single analytical platform optimised for business value. Build Scalable Infrastructure: Architect cloud-native solutions that support both batch and streaming data workflows using tools like Databricks, Kafka, and Snowflake. Implement Governance Frameworks: Define and enforce enterprise-wide data standards for access control, privacy, quality, security, and lineage. Enable Metadata & Cataloguing: Deploy metadata management and cataloguing tools to enhance data discoverability and self-service analytics. Operationalise AI/ML Pipelines: Lead data architecture that supports AI/ML initiatives, including demand forecasting, pricing models, and personalisation. Partner Across Functions: Translate business needs into data architecture solutions by collaborating with leaders in Marketing, Finance, Supply Chain, R&D, and Technology. Optimize Cloud Cost & Performance: Roll out compute and storage systems that balance cost efficiency, performance, and observability across platforms. Establish Data Leadership: Build and mentor a high-performing data team across India and NZ, and drive alignment across engineering, analytics, and governance. Vendor and Tool Strategy: Evaluate external tools and partners to ensure the data ecosystem is future-ready, scalable, and cost-effective. What are we Looking for? 8+ years of experience in data architecture, with 3+ years in a senior or leadership role across cloud or hybrid environments Proven ability to design and scale large data platforms supporting analytics, real-time reporting, and AI/ML use cases Hands-on expertise with ingestion, transformation, and orchestration pipelines (e.g. 
Kafka, Airflow, DBT, Fivetran) Strong knowledge of ERP data models, especially SAP and Odoo Experience with data governance, compliance (GDPR/CCPA), metadata cataloguing, and security practices Familiarity with distributed systems and streaming frameworks like Spark or Flink Strong stakeholder management and communication skills, with the ability to influence both technical and business teams Experience building and leading cross-regional data teams Tools & Technologies Cloud Platforms: AWS (S3, EMR, Kinesis, Glue), Azure (Synapse, ADLS), GCP Big Data: Hadoop, Apache Spark, Apache Flink Streaming: Kafka, Kinesis, Pub/Sub Orchestration: Airflow, Prefect, Dagster, DBT Warehousing: Snowflake, Redshift, BigQuery, Databricks Delta NoSQL: Cassandra, DynamoDB, HBase, Redis Query Engines: Presto/Trino, Athena IaC & CI/CD: Terraform, GitLab Actions Monitoring: Prometheus, Grafana, ELK, OpenTelemetry Security/Governance: IAM, TLS, KMS, Amundsen, DataHub, Collibra, DBT for lineage What do we Offer? 💰 Competitive compensation 💰 Annual Performance Bonus ⌛️ 5 Working Days with Flexible Working Hours 🚑 Medical Insurance for self & family 🚩 Training & skill development programs 🤘🏼 Work with the Global team, Make the most of the diverse knowledge 🍕 Several discussions over Multiple Pizza Parties A lot more! Come and discover us!
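For illustration, a bare-bones kafka-python consumer showing the streaming-ingestion pattern referenced in the tools list above (Kafka feeding a warehouse or lake). The topic, broker, and consumer-group names are hypothetical placeholders.

```python
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "pos-transactions",                      # hypothetical topic
    bootstrap_servers=["broker-1:9092"],
    group_id="analytics-ingest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

batch = []
for message in consumer:
    batch.append(message.value)
    if len(batch) >= 500:
        # In a real pipeline this batch would land in S3/Delta/Snowflake for
        # downstream modelling; here we just report and reset it.
        print(f"flushing {len(batch)} events")
        batch.clear()
```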

Posted 2 weeks ago

Apply

7.0 - 10.0 years

8 - 16 Lacs

India

On-site

Role: Sr. Data Engineer Location: Indore, Madhya Pradesh Experience: 7-10 Years Job Type: Full-time

Job Summary: As a Data Engineer with a focus on Python, you'll play a crucial role in designing, developing, and maintaining data pipelines and ETL processes. You will work with large-scale datasets and leverage modern tools like PySpark, Airflow, and AWS Glue to automate and orchestrate data processes. Your work will support critical decision-making by ensuring data accuracy, accessibility, and efficiency across the organization.

Key Responsibilities: Design, build, and maintain scalable data pipelines using Python. Develop ETL processes for extracting, transforming, and loading data. Optimise SQL queries and database schemas for enhanced performance. Collaborate with data scientists, analysts, and stakeholders to understand data needs. Implement and monitor data quality checks to resolve any issues. Automate data processing tasks with Python scripts and tools. Ensure data security, integrity, and regulatory compliance. Document data processes, workflows, and system designs.

Primary Skills: Python Proficiency: Experience with Python, including libraries such as Pandas, NumPy, and SQLAlchemy. PySpark: Hands-on experience in distributed data processing using PySpark. AWS Glue: Practical knowledge of AWS Glue for building serverless ETL pipelines. SQL Expertise: Advanced knowledge of SQL and experience with relational databases (e.g., PostgreSQL, MySQL). Data Pipeline Development: Proven experience in building and maintaining data pipelines and ETL processes. Cloud Data Platforms: Familiarity with cloud-based platforms like AWS Redshift, Google BigQuery, or Azure Synapse. Data Warehousing: Knowledge of data warehousing and data modelling best practices. Version Control: Proficiency with Git.

Preferred Skills: Big Data Technologies: Experience with tools like Hadoop or Kafka. Data Visualization: Familiarity with visualisation tools (e.g., Tableau, Power BI). DevOps Practices: Understanding of CI/CD pipelines and DevOps practices. Data Governance: Knowledge of data governance and security best practices.

Job Type: Full-time Pay: ₹800,000.00 - ₹1,600,000.00 per year Work Location: In person Application Deadline: 30/07/2025 Expected Start Date: 19/07/2025
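To give a concrete flavour of the orchestration this role describes, here is a minimal sketch of an Airflow DAG that triggers an AWS Glue ETL job through boto3; the DAG id, Glue job name, and region are hypothetical.

```python
from datetime import datetime, timedelta

import boto3
from airflow import DAG
from airflow.operators.python import PythonOperator


def run_glue_job(**_context):
    """Start a (hypothetical) AWS Glue ETL job and return its run id."""
    glue = boto3.client("glue", region_name="ap-south-1")
    response = glue.start_job_run(JobName="daily-sales-etl")  # hypothetical Glue job name
    return response["JobRunId"]


with DAG(
    dag_id="daily_sales_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
) as dag:
    trigger_glue_etl = PythonOperator(
        task_id="trigger_glue_etl",
        python_callable=run_glue_job,
    )
```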

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Overview As Senior Analyst, Data Modeling, your focus would be to partner with D&A Data Foundation team members to create data models for Global projects. This would include independently analyzing project data needs, identifying data storage and integration needs/issues, and driving opportunities for data model reuse, satisfying project requirements. Role will advocate Enterprise Architecture, Data Design, and D&A standards, and best practices. You will be performing all aspects of Data Modeling working closely with Data Governance, Data Engineering and Data Architects teams. As a member of the data modeling team, you will create data models for very large and complex data applications in public cloud environments directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. The primary responsibilities of this role are to work with data product owners, data management owners, and data engineering teams to create physical and logical data models with an extensible philosophy to support future, unknown use cases with minimal rework. You'll be working in a hybrid environment with in-house, on-premise data sources as well as cloud and remote systems. You will establish data design patterns that will drive flexible, scalable, and efficient data models to maximize value and reuse. Responsibilities Complete conceptual, logical and physical data models for any supported platform, including SQL Data Warehouse, EMR, Spark, DataBricks, Snowflake, Azure Synapse or other Cloud data warehousing technologies. Governs data design/modeling - documentation of metadata (business definitions of entities and attributes) and constructions database objects, for baseline and investment funded projects, as assigned. Provides and/or supports data analysis, requirements gathering, solution development, and design reviews for enhancements to, or new, applications/reporting. Supports assigned project contractors (both on- & off-shore), orienting new contractors to standards, best practices, and tools. Contributes to project cost estimates, working with senior members of team to evaluate the size and complexity of the changes or new development. Ensure physical and logical data models are designed with an extensible philosophy to support future, unknown use cases with minimal rework. Develop a deep understanding of the business domain and enterprise technology inventory to craft a solution roadmap that achieves business objectives, maximizes reuse. Partner with IT, data engineering and other teams to ensure the enterprise data model incorporates key dimensions needed for the proper management: business and financial policies, security, local-market regulatory rules, consumer privacy by design principles (PII management) and all linked across fundamental identity foundations. Drive collaborative reviews of design, code, data, security features implementation performed by data engineers to drive data product development. Assist with data planning, sourcing, collection, profiling, and transformation. Create Source To Target Mappings for ETL and BI developers. Show expertise for data at all levels: low-latency, relational, and unstructured data stores; analytical and data lakes; data str/cleansing. Partner with the Data Governance team to standardize their classification of unstructured data into standard structures for data discovery and action by business customers and stakeholders. 
Support data lineage and mapping of source system data to canonical data stores for research, analysis and productization. Qualifications 8+ years of overall technology experience that includes at least 4+ years of data modeling and systems architecture. 3+ years of experience with Data Lake Infrastructure, Data Warehousing, and Data Analytics tools. 4+ years of experience developing enterprise data models. Experience in building solutions in the retail or in the supply chain space. Expertise in data modeling tools (ER/Studio, Erwin, IDM/ARDM models). Experience with integration of multi cloud services (Azure) with on-premises technologies. Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations. Experience building/operating highly available, distributed systems of data extraction, ingestion, and processing of large data sets. Experience with at least one MPP database technology such as Redshift, Synapse, Teradata or SnowFlake. Experience with version control systems like Github and deployment & CI tools. Experience with Azure Data Factory, Databricks and Azure Machine learning is a plus. Experience of metadata management, data lineage, and data glossaries is a plus. Working knowledge of agile development, including DevOps and DataOps concepts. Familiarity with business intelligence tools (such as PowerBI).
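As a rough illustration of the physical modeling work described above, a small SQLAlchemy sketch of a fact table joined to a conformed dimension might look like this; table, column, and connection names are invented for the example and do not reflect any actual enterprise model.

```python
from sqlalchemy import Column, Date, ForeignKey, Integer, Numeric, String, create_engine
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()


class DimProduct(Base):
    """Conformed product dimension (illustrative)."""

    __tablename__ = "dim_product"

    product_key = Column(Integer, primary_key=True)
    sku = Column(String(32), nullable=False, unique=True)
    category = Column(String(64))


class FactShipment(Base):
    """Shipment fact at order-line grain (illustrative)."""

    __tablename__ = "fact_shipment"

    shipment_key = Column(Integer, primary_key=True)
    product_key = Column(Integer, ForeignKey("dim_product.product_key"), nullable=False)
    ship_date = Column(Date, nullable=False)
    quantity = Column(Integer, nullable=False)
    net_revenue = Column(Numeric(18, 2))

    product = relationship(DimProduct)


if __name__ == "__main__":
    # In-memory SQLite as a stand-in for a real warehouse connection.
    engine = create_engine("sqlite:///:memory:")
    Base.metadata.create_all(engine)
```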

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Overview As a member of the Platform engineering team, you will be the key techno-functional expert leading and overseeing PepsiCo's Platforms & operations and driving a strong vision for how Platform engineering can proactively create a positive impact on the business. You'll be an empowered leader of a team of Platform engineers who build Platform products for platform and cost optimization and build tools for Platform Ops and Data Ops on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. As leader of the Platform engineering team, you will help manage the platform Governance team that builds frameworks to guardrail the platforms of very large and complex data applications in public cloud environments and directly impact the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners and business users. You'll be working in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems.

Responsibilities Active contributor to cost optimization of platforms and services. Manage and scale Azure Data Platforms to support new product launches and drive Platform Stability and Observability across data products. Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for Data Platforms for cost and performance. Responsible for implementing best practices around systems integration, security, performance and Platform management. Empower the business by creating value through increased adoption of data, data science and the business intelligence landscape. Collaborate with internal clients (data science and product teams) to drive solutioning and POC discussions. Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects and strategic internal and external partners. Develop and optimize procedures to productionize data science models. Define and manage SLAs for Platforms and processes running in production. Support large-scale experimentation done by data scientists. Prototype new approaches and build solutions at scale. Research state-of-the-art methodologies. Create documentation for learnings and knowledge transfer. Create and audit reusable packages or libraries.

Qualifications 10+ years of overall technology experience that includes at least 4+ years of hands-on software development, program management, and advanced analytics. 4+ years of experience with Power BI, Tableau, Data Warehousing, and Data Analytics tools. 4+ years of experience in platform optimization and performance tuning. Experience in managing multiple teams and coordinating with different stakeholders to implement the vision of the team. Fluent with Azure cloud services; Azure Certification is a plus. Experience with integration of multi-cloud services with on-premises technologies. Experience with data modeling, data warehousing, and building semantic models. Proficient in DAX queries, Copilot and AI skills. Experience building/operating highly available, distributed systems for data visualization. Experience with at least one MPP database technology such as Redshift, Synapse or Snowflake. Experience with version control systems like GitHub and deployment & CI tools. Knowledge of Azure Data Factory, Azure Databricks.
Experience with Statistical/ML techniques is a plus. Experience with building solutions in the retail or in the supply chain space is a plus Understanding of metadata management, data lineage, and data glossaries is a plus. Working knowledge of agile development, including DevOps and DataOps concepts. Familiarity with Augmented Analytics tools is Plus (such as ThoughtSpot, Tellius).

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Kochi, Kerala, India

On-site

Junior DBA (Oracle, SQL Server, Cloud DBs) Experience: 1–3 years (or relevant internship/training experience)

Role Overview We are looking for a motivated Junior Database Administrator to support daily operations across a diverse estate including Oracle, SQL Server, PostgreSQL, MySQL, and Amazon Redshift in a hybrid (on-prem and cloud) environment.

Key Responsibilities Assist in database backup, recovery, patching, and general maintenance. Monitor health and availability of production and non-production databases. Support cloning and refreshes of development/test environments. Participate in database deployment activities and version control processes. Maintain scripts for automation of routine DBA tasks (e.g., health checks, cleanup). Help maintain compliance with internal security policies and SOX access audits. Document standard operating procedures and support playbooks. Assist with Oracle EBS environment maintenance and troubleshooting.

Familiarity with backup/recovery tools (e.g., RMAN, SQL Server Maintenance Plans). Basic scripting knowledge (Shell, PowerShell, or Python).
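For a sense of the routine automation this role mentions (health checks, cleanup scripts), a minimal Python health-check sketch against a PostgreSQL instance could look like the following; the connection string is a placeholder.

```python
import sys

import psycopg2  # PostgreSQL driver; other engines would use their own drivers


def check_database(dsn: str) -> bool:
    """Return True if the database accepts connections and answers a trivial query."""
    try:
        with psycopg2.connect(dsn, connect_timeout=5) as conn:
            with conn.cursor() as cur:
                cur.execute("SELECT 1;")
                cur.fetchone()
        return True
    except psycopg2.OperationalError as exc:
        print(f"Health check failed: {exc}", file=sys.stderr)
        return False


if __name__ == "__main__":
    # Placeholder DSN; in practice this comes from a secrets store, not source code.
    healthy = check_database("host=localhost dbname=appdb user=monitor password=change_me")
    sys.exit(0 if healthy else 1)
```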

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Junior DBA (Oracle, SQL Server, Cloud DBs) Experience: 1–3 years (or relevant internship/training experience)

Role Overview We are looking for a motivated Junior Database Administrator to support daily operations across a diverse estate including Oracle, SQL Server, PostgreSQL, MySQL, and Amazon Redshift in a hybrid (on-prem and cloud) environment.

Key Responsibilities Assist in database backup, recovery, patching, and general maintenance. Monitor health and availability of production and non-production databases. Support cloning and refreshes of development/test environments. Participate in database deployment activities and version control processes. Maintain scripts for automation of routine DBA tasks (e.g., health checks, cleanup). Help maintain compliance with internal security policies and SOX access audits. Document standard operating procedures and support playbooks. Assist with Oracle EBS environment maintenance and troubleshooting.

Familiarity with backup/recovery tools (e.g., RMAN, SQL Server Maintenance Plans). Basic scripting knowledge (Shell, PowerShell, or Python).

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Mohali district, India

On-site

Job Title: DevOps/MLOps Expert Location: Mohali (On-Site) Employment Type: Full-Time Experience: 6 + years Qualification: B.Tech CSE About the Role We are seeking a highly skilled DevOps/MLOps Expert to join our rapidly growing AI-based startup building and deploying cutting-edge enterprise AI/ML solutions. This is a critical role that will shape our infrastructure, deployment pipelines, and scale our ML operations to serve large-scale enterprise clients. As our DevOps/MLOps Expert , you will be responsible for bridging the gap between our AI/ML development teams and production systems, ensuring seamless deployment, monitoring, and scaling of our ML-powered enterprise applications. You’ll work at the intersection of DevOps, Machine Learning, and Data Engineering in a fast-paced startup environment with enterprise-grade requirements. Key Responsibilities MLOps & Model Deployment • Design, implement, and maintain end-to-end ML pipelines from model development to production deployment • Build automated CI/CD pipelines specifically for ML models using tools like MLflow, Kubeflow, and custom solutions • Implement model versioning, experiment tracking, and model registry systems • Monitor model performance, detect drift, and implement automated retraining pipelines • Manage feature stores and data pipelines for real-time and batch inference • Build scalable ML infrastructure for high-volume data processing and analytics Enterprise Cloud Infrastructure & DevOps • Architect and manage cloud-native infrastructure with focus on scalability, security, and compliance • Implement Infrastructure as Code (IaC) using Terraform , CloudFormation , or Pulumi • Design and maintain Kubernetes clusters for containerized ML workloads • Build and optimize Docker containers for ML applications and microservices • Implement comprehensive monitoring, logging, and alerting systems • Manage secrets, security, and enterprise compliance requirements Data Engineering & Real-time Processing • Build and maintain large-scale data pipelines using Apache Airflow , Prefect , or similar tools • Implement real-time data processing and streaming architectures • Design data storage solutions for structured and unstructured data at scale • Implement data validation, quality checks, and lineage tracking • Manage data security, privacy, and enterprise compliance requirements • Optimize data processing for performance and cost efficiency Enterprise Platform Operations • Ensure high availability (99.9%+) and performance of enterprise-grade platforms • Implement auto-scaling solutions for variable ML workloads • Manage multi-tenant architecture and data isolation • Optimize resource utilization and cost management across environments • Implement disaster recovery and backup strategies • Build 24x7 monitoring and alerting systems for mission-critical applications Required Qualifications Experience & Education • 4-8 years of experience in DevOps/MLOps with at least 2+ years focused on enterprise ML systems • Bachelor’s/Master’s degree in Computer Science, Engineering, or related technical field • Proven experience with enterprise-grade platforms or large-scale SaaS applications • Experience with high-compliance environments and enterprise security requirements • Strong background in data-intensive applications and real-time processing systems Technical Skills Core MLOps Technologies • ML Frameworks : TensorFlow, PyTorch, Scikit-learn, Keras, XGBoost • MLOps Tools : MLflow, Kubeflow, Metaflow, DVC, Weights & Biases • Model Serving : 
TensorFlow Serving, PyTorch TorchServe, Seldon Core, KFServing • Experiment Tracking : MLflow, Neptune.ai, Weights & Biases, Comet DevOps & Cloud Technologies • Cloud Platforms : AWS, Azure, or GCP with relevant certifications • Containerization : Docker, Kubernetes (CKA/CKAD preferred) • CI/CD : Jenkins, GitLab CI, GitHub Actions, CircleCI • IaC : Terraform, CloudFormation, Pulumi, Ansible • Monitoring : Prometheus, Grafana, ELK Stack, Datadog, New Relic Programming & Scripting • Python (advanced) - primary language for ML operations and automation • Bash/Shell scripting for automation and system administration • YAML/JSON for configuration management and APIs • SQL for data operations and analytics • Basic understanding of Go or Java (advantage) Data Technologies • Data Pipeline Tools : Apache Airflow, Prefect, Dagster, Apache NiFi • Streaming & Real-time : Apache Kafka, Apache Spark, Apache Flink, Redis • Databases : PostgreSQL, MongoDB, Elasticsearch, ClickHouse • Data Warehousing : Snowflake, BigQuery, Redshift, Databricks • Data Versioning : DVC, LakeFS, Pachyderm Preferred Qualifications Advanced Technical Skills • Enterprise Security : Experience with enterprise security frameworks, compliance (SOC2, ISO27001) • High-scale Processing : Experience with petabyte-scale data processing and real-time analytics • Performance Optimization : Advanced system optimization, distributed computing, caching strategies • API Development : REST/GraphQL APIs, microservices architecture, API gateways Enterprise & Domain Experience • Previous experience with enterprise clients or B2B SaaS platforms • Experience with compliance-heavy industries (finance, healthcare, government) • Understanding of data privacy regulations (GDPR, SOX, HIPAA) • Experience with multi-tenant enterprise architectures Leadership & Collaboration • Experience mentoring junior engineers and technical team leadership • Strong collaboration with data science teams , product managers , and enterprise clients • Experience with agile methodologies and enterprise project management • Understanding of business metrics , SLAs , and enterprise ROI Growth Opportunities • Career Path : Clear progression to Lead DevOps Engineer or Head of Infrastructure • Technical Growth : Work with cutting-edge enterprise AI/ML technologies • Leadership : Opportunity to build and lead the DevOps/Infrastructure team • Industry Exposure : Work with Government & MNCs enterprise clients and cutting-edge technology stacks Success Metrics & KPIs Technical KPIs • System Uptime : Maintain 99.9%+ availability for enterprise clients • Deployment Frequency : Enable daily deployments with zero downtime • Performance : Ensure optimal response times and system performance • Cost Optimization : Achieve 20-30% annual infrastructure cost reduction • Security : Zero security incidents and full compliance adherence Business Impact • Time to Market : Reduce deployment cycles and improve development velocity • Client Satisfaction : Maintain 95%+ enterprise client satisfaction scores • Team Productivity : Improve engineering team efficiency by 40%+ • Scalability : Support rapid client base growth without infrastructure constraints Why Join Us Be part of a forward-thinking, innovation-driven company with a strong engineering culture. Influence high-impact architectural decisions that shape mission-critical systems. Work with cutting-edge technologies and a passionate team of professionals. 
Competitive compensation, flexible working environment, and continuous learning opportunities. How to Apply Please submit your resume and a cover letter outlining your relevant experience and how you can contribute to Aaizel Tech Labs’ success. Send your application to hr@aaizeltech.com , bhavik@aaizeltech.com or anju@aaizeltech.com.
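To make the experiment-tracking portion of the stack above concrete, here is a minimal MLflow sketch that logs parameters, a metric, and a model artifact for a toy classifier; the tracking URI, experiment name, and parameters are illustrative only.

```python
import mlflow
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Illustrative tracking server and experiment name.
mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment("demand-forecasting-poc")

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="rf-baseline"):
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    accuracy = accuracy_score(y_test, model.predict(X_test))

    # Record what was run, how it scored, and the resulting artifact.
    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy)
    mlflow.sklearn.log_model(model, "model")
```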

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Company Description Omio’s vision is to enable people to travel seamlessly anywhere, anyway. We are bringing all global transport into a single distribution system and creating end-to-end magical consumer journeys. 1 billion users use Omio, doing over a billion searches a year. With Omio you can compare and book trains, buses, ferries and flights globally, offering transparent pricing and easy booking, Omio makes travel planning simple, flexible, and personal. Omio is available in 45 countries, 32 languages, 33 currencies, and collaborating with over 2,300 providers to offer millions of unique journeys and bookable travel modes. With 12,000 local transport operators and over 10 million unique routes searched each year, and 240 searchable countries, including our discovery product "Rome2Rio", which helps trip planners coordinate their travel anywhere in the world. Our offices are based in Berlin, Prague, Melbourne, Brazil, Bangalore, and London. We are a growing team of more than 430 passionate employees from more than 50 countries who share the same vision: to create a single tool to help send travellers almost anywhere in the world. Job Description Job Description As a Principal Software Engineer, you will design, implement and evangelise for cross-team projects that both have a company-wide business impact and at the same time, pushing our technical status-quo forward. We expect you to be mindful, approachable, experienced in a wide range of technologies and an expert in designing & building highly available distributed systems. If this excites you instead of scaring you, you will fit nicely! Our Technologies Java, NodeJS Python, Go Kubernetes, Docker, Jenkins, Terraform Google Cloud Platform (Pub/Sub, GCS, BigQuery, BigTable, Dataflow, ...), AWS (RedShift, Kinesis, S3) MySQL, PostgreSQL, Couchbase, Clickhouse Check more details: https://omio.tech/radar Qualifications Qualifications Who you are Passionate about technology but also interested in the business impact of projects you work on Able to gather requirements, create a delivery plan and handle alignment between different teams/departments to deliver your project, you will have end-to-end responsibility. Able to lead internal discussions about the direction of major areas of technology in Omio Able to mentor & inspire engineers across the organisation As a thoughtful leader and evangelist, you work out architecture, engineering best practices and solutions across the Tech department. We expect you to contribute to both internal and external tech-talks representing Omio You provide full and detailed analysis, insightful comments and recommendations for action. You contribute to the development of the tribe strategy across the organisation Your Skill Set 10+ years in Software Engineering position Deep knowledge in the JVM environment, large-scale distributed systems and cloud solutions Solid communication competences to transport vision, align teams and to advise the management team, good English language skills Experience in new product development and project management Demonstrated ability to interact and collaborate with all levels of internal and external customers Proactive problem solving - you resolve obstacles before they can become problems Method expert, ability to quickly interpret an extensive variety of technical information and find resolution and the method to an issue quickly. 
Additional Information Learn more about Omio Engineering and our Team: https://medium.com/omio-engineering

Here at Omio, we know that no two people are alike, and that’s a great thing. Diversity in culture, thought and background has been key to growing our product beyond borders to reach millions of users from all over the world. That’s why we believe in giving equal opportunity to all, regardless of race, gender, religion, sexual orientation, age, or disability.

Hiring process and background checks At Omio, we work in partnership with Giant Screening; once a job offer has been accepted, Giant will be engaged to carry out background screening. Giant will reach out to you via email and occasionally via telephone/text message so that they can gather all relevant information required. Consent will be requested prior to any information being passed to our services company.

What’s in it for you? A competitive and attractive compensation package. Opportunity to develop your skills to a new level. A generous pension scheme. A diverse team of more than 45 nationalities. Develop maintainable solutions for complex problems with broad impact on the business as a whole. Make decisions that will have a direct impact on the long-term success of Omio.

Diversity makes us stronger We value diversity and welcome all applicants regardless of ethnicity, religion, national origin, sexual orientation, gender, gender identity, age or disability.

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About Us At ExxonMobil, our vision is to lead in energy innovations that advance modern living and a net-zero future. As one of the world’s largest publicly traded energy and chemical companies, we are powered by a unique and diverse workforce fueled by the pride in what we do and what we stand for. The success of our Upstream, Product Solutions and Low Carbon Solutions businesses is the result of the talent, curiosity and drive of our people. They bring solutions every day to optimize our strategy in energy, chemicals, lubricants and lower-emissions technologies. We invite you to bring your ideas to ExxonMobil to help create sustainable solutions that improve quality of life and meet society’s evolving needs. Learn more about our What and our Why and how we can work together . ExxonMobil’s affiliates in India ExxonMobil’s affiliates have offices in India in Bengaluru, Mumbai and the National Capital Region. ExxonMobil’s affiliates in India supporting the Product Solutions business engage in the marketing, sales and distribution of performance as well as specialty products across chemicals and lubricants businesses. The India planning teams are also embedded with global business units for business planning and analytics. ExxonMobil’s LNG affiliate in India supporting the upstream business provides consultant services for other ExxonMobil upstream affiliates and conducts LNG market-development activities. The Global Business Center - Technology Center provides a range of technical and business support services for ExxonMobil’s operations around the globe. ExxonMobil strives to make a positive contribution to the communities where we operate and its affiliates support a range of education, health and community-building programs in India. Read more about our Corporate Responsibility Framework. To know more about ExxonMobil in India, visit ExxonMobil India and the Energy Factor India. What Role You Will Play In Our Team A highly analytical and technically skilled FP&A Data Integration Analyst to join our Finance team. This role bridges financial planning and data engineering by integrating data from multiple systems to support forecasting, budgeting, and strategic decision-making.The ideal candidate will have a strong foundation in finance, data modeling, and ETL (Extract, Transform, Load) processes. What You Will Do Key Responsibilities: Design, build, and maintain data pipelines to integrate financial and operational data from various sources (ERP, CRM, BI tools, etc.). Collaborate with IT, data engineering, and business teams to ensure seamless data flow into FP&A models and dashboards. Support the budgeting, forecasting, and long-range planning processes with integrated, real-time data. Develop and maintain financial models and dashboards using tools like Power BI, Tableau, or Looker. Perform data validation and reconciliation to ensure accuracy and consistency across systems. Automate recurring financial reports and data refresh processes. Conduct variance analysis and scenario modeling using integrated datasets. Identify opportunities to improve data quality, structure, and accessibility for financial analysis. Document data flows, integration logic, and financial data definitions. About You Required Skills and Qualifications: Bachelor’s degree in Finance, Accounting, Data Science, Computer Science, or a related field. Minimum 3 years of experience in FP&A, data analytics, or data integration roles. Proficiency in SQL and experience with ETL tools (e.g., Alteryx, Talend, Apache Airflow). 
Strong understanding of financial principles and planning processes. Experience with ERP systems (e.g., SAP, Oracle, NetSuite) and data warehouses (e.g., Snowflake, Redshift). Advanced Excel skills and experience with BI tools (e.g., Power BI, Tableau). Familiarity with Python or R for data manipulation is a plus. Excellent communication and cross-functional collaboration skills.

Preferred skills / experience: Familiarity with cloud platforms (AWS, Azure, or GCP) and data warehouses (e.g., Snowflake, Redshift). Proficiency in Python or R for data manipulation and automation. Exposure to financial planning tools (e.g., Workday Adaptive, Anaplan, Adaptive Insights, Oracle Hyperion). Background in SaaS, e-commerce, or data-driven industries. Strong understanding of financial reporting standards and compliance. Ability to work in a fast-paced, cross-functional environment.

Your Benefits An ExxonMobil career is one designed to last. Our commitment to you runs deep: our employees grow personally and professionally, with benefits built on our core categories of health, security, finance and life. We offer you: Competitive compensation Medical plans, maternity leave and benefits, life, accidental death and dismemberment benefits Retirement benefits Global networking & cross-functional opportunities Annual vacations & holidays Day care assistance program Training and development program Tuition assistance program Workplace flexibility policy Relocation program Transportation facility Please note benefits may change from time to time without notice, subject to applicable laws. The benefits programs are based on the Company’s eligibility guidelines.

EEO Statement ExxonMobil is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, age, national origin or disability status.

Business solicitation and recruiting scams ExxonMobil does not use recruiting or placement agencies that charge candidates an advance fee of any kind (e.g., placement fees, immigration processing fees, etc.). Follow the LINK to understand more about recruitment scams in the name of ExxonMobil. Nothing herein is intended to override the corporate separateness of local entities. Working relationships discussed herein do not necessarily represent a reporting connection, but may reflect a functional guidance, stewardship, or service relationship. Exxon Mobil Corporation has numerous affiliates, many with names that include ExxonMobil, Exxon, Esso and Mobil. For convenience and simplicity, those terms and terms like corporation, company, our, we and its are sometimes used as abbreviated references to specific affiliates or affiliate groups. Abbreviated references describing global or regional operational organizations and global or regional business lines are also sometimes used for convenience and simplicity. Similarly, ExxonMobil has business relationships with thousands of customers, suppliers, governments, and others.
For convenience and simplicity, words like venture, joint venture, partnership, co-venturer, and partner are used to indicate business relationships involving common activities and interests, and those words may not indicate precise legal relationships.
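The data validation and reconciliation work referenced in this posting can be illustrated with a small pandas sketch that compares ERP revenue totals against warehouse totals by period and flags variances; the figures, column names, and tolerance are invented.

```python
import pandas as pd

# Stand-ins for extracts from an ERP system and a data warehouse.
erp = pd.DataFrame({
    "period": ["2025-01", "2025-02", "2025-03"],
    "revenue": [1_200_000.00, 1_150_000.00, 1_310_000.00],
})
warehouse = pd.DataFrame({
    "period": ["2025-01", "2025-02", "2025-03"],
    "revenue": [1_200_000.00, 1_149_500.00, 1_310_000.00],
})

recon = erp.merge(warehouse, on="period", suffixes=("_erp", "_dwh"))
recon["variance"] = recon["revenue_erp"] - recon["revenue_dwh"]
recon["variance_pct"] = recon["variance"] / recon["revenue_erp"]

# Flag periods whose variance exceeds a 0.01% tolerance (threshold is illustrative).
exceptions = recon[recon["variance_pct"].abs() > 0.0001]
print(exceptions[["period", "revenue_erp", "revenue_dwh", "variance"]])
```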

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

India

On-site

Job Title: Senior Backend Engineer (Python) Time Overlap Requirement: 4 hours overlap with Eastern Time (9 AM–1 PM EST) Key Responsibilities: Design, develop, test, and deploy scalable and maintainable backend applications capable of handling large volumes of data Build and enhance RESTful APIs, contributing to microservices architecture and overall API design Lead code reviews and mentor junior developers to elevate team performance Stay updated with new technologies and contribute to a culture of continuous learning Create and maintain high-quality technical documentation and system architecture diagrams Required Skills: Minimum 5 years of hands-on experience in Python development Strong experience working with RESTful APIs, including best practices for caching, JWT authentication, load testing, and API frameworks Proficiency in Linux/Unix environments and shell scripting (Bash) Solid understanding of microservices architecture, containerization (Docker), Kubernetes, and serverless platforms like AWS Lambda Proven experience working with large datasets and database technologies such as SQL, MySQL, PostgreSQL, Amazon Aurora, Redis, Redshift, or BigQuery Commitment to comprehensive testing practices, including unit, integration, and functional testing Knowledge of API security principles and performance optimization techniques Skilled in Git, including branching, conflict resolution, and managing pull requests Strong ability to debug and resolve performance issues in both application and database layers Demonstrated history of mentoring and guiding junior engineers Enthusiasm for writing clean, well-documented, and maintainable code Nice-to-Have Skills: Familiarity with JavaScript or other programming languages such as Go or C/C++ Exposure to frontend concepts or frameworks (e.g., React.js, Web Components) Experience with AWS, Google Cloud Platform, or Microsoft Azure A passion for data-centric software and building tools that leverage data effectively
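As a small illustration of the REST and JWT requirements listed above, a minimal FastAPI endpoint that validates a bearer token might look like this; the secret key and claims are placeholders, and a production service would delegate to a proper identity provider.

```python
import jwt  # PyJWT
from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer

app = FastAPI()
bearer = HTTPBearer()

SECRET_KEY = "change-me"  # placeholder; load from a secrets manager in a real deployment


def current_user(credentials: HTTPAuthorizationCredentials = Depends(bearer)) -> dict:
    """Decode and validate the JWT carried in the Authorization header."""
    try:
        return jwt.decode(credentials.credentials, SECRET_KEY, algorithms=["HS256"])
    except jwt.PyJWTError:
        raise HTTPException(status_code=401, detail="Invalid or expired token")


@app.get("/orders/{order_id}")
def get_order(order_id: int, user: dict = Depends(current_user)) -> dict:
    # A real service would query PostgreSQL/Redis here; this returns a stub payload.
    return {"order_id": order_id, "requested_by": user.get("sub")}
```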

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Position: Sr Data Operations Years of Experience – 6-8 Years Job Location: S.B Road –Pune, For other locations (Remote) The Position We are seeking a seasoned engineer with a passion for changing the way millions of people save energy. You’ll work within the Deliver and Operate team to build and improve our platforms to deliver flexible and creative solutions to our utility partners and end users and help us achieve our ambitious goals for our business and the planet. We are seeking a highly skilled and detail-oriented Software Engineer II for Data Operations team to maintain our data infrastructure, pipelines, and work-flows. You will play a key role in ensuring the smooth ingestion, transformation, validation, and delivery of data across systems. This role is ideal for someone with a strong understanding of data engineering and operational best practices who thrives in high-availability environments. Responsibilities & Skills You should: Monitor and maintain data pipelines and ETL processes to ensure reliability and performance. Automate routine data operations tasks and optimize workflows for scalability and efficiency. Troubleshoot and resolve data-related issues, ensuring data quality and integrity. Collaborate with data engineering, analytics, and DevOps teams to support data infrastructure. Implement monitoring, alerting, and logging systems for data pipelines. Maintain and improve data governance, access controls, and compliance with data policies. Support deployment and configuration of data tools, services, and platforms. Participate in on-call rotation and incident response related to data system outages or failures. Required Skills : 5+ years of experience in data operations, data engineering, or a related role. Strong SQL skills and experience with relational databases (e.g., PostgreSQL, MySQL). Proficiency with data pipeline tools (e.g., Apache Airflow). Experience with cloud platforms (AWS, GCP) and cloud-based data services (e.g., Redshift, BigQuery). Familiarity with scripting languages such as Python, Bash, or Shell. Knowledge of version control (e.g., Git) and CI/CD workflows. Qualifications Bachelor's degree in Computer Science, Engineering, Data Science, or a related field. Experience with data observability tools (e.g., Splunk, DataDog). Background in DevOps or SRE with focus on data systems. Exposure to infrastructure-as-code (e.g., Terraform, CloudFormation). Knowledge of streaming data platforms (e.g., Kafka, Spark Streaming).

Posted 2 weeks ago

Apply

4.0 years

10 - 15 Lacs

Hyderabad, Telangana, India

On-site

About The Opportunity Join a dynamic leader in the cloud data engineering sector, specializing in advanced data solutions and real-time analytics for enterprise clients. This role offers an on-site opportunity in India to work on cutting-edge AWS infrastructures where innovation is at the forefront of business transformation. This opportunity is ideal for professionals with 4+ years of proven experience in AWS data engineering, Python, and PySpark. You will contribute to designing, optimizing, and maintaining scalable data pipelines that drive business intelligence and operational efficiency. Role & Responsibilities Design, develop, and maintain robust AWS-based data pipelines using Python and PySpark. Implement efficient ETL processes, ensuring data integrity and optimal performance across AWS services (S3, Glue, EMR, Redshift). Collaborate with cross-functional teams to integrate data engineering solutions within broader business-critical applications. Troubleshoot and optimize existing data workflows, ensuring high availability, scalability, and security of cloud solutions. Exercise best practices in coding, version control, and documentation to maintain a high standard of engineering excellence. Skills & Qualifications Must-Have: 4+ years of hands-on experience in AWS data engineering with proven expertise in Python and PySpark. Must-Have: Proficiency in developing and maintaining ETL processes, using AWS services such as S3, Glue, EMR, and Redshift. Must-Have: Strong problem-solving skills and a deep understanding of data modeling, data warehousing concepts, and performance optimization. Preferred: Experience with AWS Lambda, Airflow, or similar cloud orchestration tools. Preferred: Familiarity with containerization, CI/CD pipelines, and infrastructure-as-code (e.g., CloudFormation, Terraform). Preferred: AWS certifications or equivalent cloud credentials. Benefits & Culture Highlights Work in a collaborative, fast-paced environment that rewards innovation and continuous improvement. Enjoy opportunities for professional growth and skill development through ongoing projects and training. Benefit from competitive compensation and the ability to work on transformative cloud technology solutions. Skills: glue,redshift,performance optimization,infrastructure-as-code,emr,cloudformation,ci/cd pipelines,data warehousing,aws,terraform,cloud services,aws lambda,pyspark,etl processes,python,data modeling,s3,containerization,etl,airflow
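A compact PySpark batch-ETL sketch in the spirit of this role (read raw CSV from S3, clean it, write partitioned Parquet back) could look like the following; bucket paths and column names are made up for the example.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Hypothetical input and output locations.
SOURCE = "s3://example-raw-bucket/orders/"
TARGET = "s3://example-curated-bucket/orders_parquet/"

orders = spark.read.option("header", "true").csv(SOURCE)

curated = (
    orders
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("order_date", F.to_date("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount") > 0)          # drop obviously bad rows
    .dropDuplicates(["order_id"])
)

# Partitioning by date keeps downstream Athena/Redshift Spectrum scans cheap.
curated.write.mode("overwrite").partitionBy("order_date").parquet(TARGET)
```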

Posted 2 weeks ago

Apply

7.0 - 10.0 years

8 - 16 Lacs

Vijay Nagar, Indore, Madhya Pradesh

On-site

Role: Sr. Data Engineer Location: Indore, Madhya Pradesh Experience: 7-10 Years Job Type: Full-time

Job Summary: As a Data Engineer with a focus on Python, you'll play a crucial role in designing, developing, and maintaining data pipelines and ETL processes. You will work with large-scale datasets and leverage modern tools like PySpark, Airflow, and AWS Glue to automate and orchestrate data processes. Your work will support critical decision-making by ensuring data accuracy, accessibility, and efficiency across the organization.

Key Responsibilities: Design, build, and maintain scalable data pipelines using Python. Develop ETL processes for extracting, transforming, and loading data. Optimise SQL queries and database schemas for enhanced performance. Collaborate with data scientists, analysts, and stakeholders to understand data needs. Implement and monitor data quality checks to resolve any issues. Automate data processing tasks with Python scripts and tools. Ensure data security, integrity, and regulatory compliance. Document data processes, workflows, and system designs.

Primary Skills: Python Proficiency: Experience with Python, including libraries such as Pandas, NumPy, and SQLAlchemy. PySpark: Hands-on experience in distributed data processing using PySpark. AWS Glue: Practical knowledge of AWS Glue for building serverless ETL pipelines. SQL Expertise: Advanced knowledge of SQL and experience with relational databases (e.g., PostgreSQL, MySQL). Data Pipeline Development: Proven experience in building and maintaining data pipelines and ETL processes. Cloud Data Platforms: Familiarity with cloud-based platforms like AWS Redshift, Google BigQuery, or Azure Synapse. Data Warehousing: Knowledge of data warehousing and data modelling best practices. Version Control: Proficiency with Git.

Preferred Skills: Big Data Technologies: Experience with tools like Hadoop or Kafka. Data Visualization: Familiarity with visualisation tools (e.g., Tableau, Power BI). DevOps Practices: Understanding of CI/CD pipelines and DevOps practices. Data Governance: Knowledge of data governance and security best practices.

Job Type: Full-time Pay: ₹800,000.00 - ₹1,600,000.00 per year Work Location: In person Application Deadline: 30/07/2025 Expected Start Date: 19/07/2025

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Role : Lead Data Analyst Experience : 5 To 10 Years Location : Hyderabad ( Hybrid ) Key Responsibilities : 1. Hands-On Data Analysis & BI Development (Individual Contributor) : ○ Design and develop dashboards, reports, and data models using BI tools (e.g., Metabase, Power BI, Tableau, Looker). ○ Write complex SQL queries to extract, clean, and analyze data from multiple sources. ○ Conduct in-depth analyses of product usage, user behavior, and model performance metrics, turning data into clear, actionable insights. 2. Team Mentoring : ○ Mentor and develop a small team of data analysts/BI developers, providing guidance on best practices and career growth. ○ Foster a collaborative team culture that values learning, innovation, and continuous improvement. 3. Analytics Strategy & Execution ○ Partner with Product, Engineering, Finance, and other teams to identify key analytics use cases, define success metrics, and track performance. ○ Develop and maintain a roadmap for analytics projects, balancing short-term requests with long-term strategic initiatives. ○ Champion a data-driven culture across the company, promoting the use of analytics tools and methodologies. 4. Data Infrastructure & Governance ○ Collaborate closely with Data Engineering to ensure the data warehouse and data pipelines are efficient, scalable, and reliable. ○ Advocate for and implement best practices in data quality, security, and governance. 5. Cross-Functional Collaboration ○ Work with Finance to provide accurate usage data for billing and revenue operations. ○ Support Customer Success with ad-hoc analysis requests and custom reporting for key clients. ○ Partner with Product and Engineering teams to refine data collection and improve product analytics capabilities. 6. Stakeholder Communication ○ Present analyses, findings, and recommendations to leadership in both technical and non-technical terms. ○ Serve as the subject matter expert on analytics tools, methods, and data interpretation for internal stakeholders. Required Qualifications ● Education: ○ Bachelor’s or Master’s degree in a quantitative field (e.g., Computer Science, Statistics, Mathematics, or related). ● Experience: ○ 5+ years of experience in data analytics and business intelligence ○ 1+ years of experience in a leadership or mentoring capacity (formal or informal). ○ Hands-on experience building dashboards and writing complex SQL queries. ● Technical Skills: ○ Proficiency with BI tools (e.g., Metabase, Power BI, Tableau, Looker) for data visualization and dashboard development. ○ Strong SQL skills and familiarity with data warehousing concepts. ○ Understanding of data modeling, ETL/ELT pipelines, and cloud-based data platforms (e.g., Redshift, Snowflake, BigQuery). ○ Nice-to-have: Experience with Python/R for advanced analytics or machine learning. ● Soft Skills: ○ Excellent communication and presentation skills; able to translate complex data findings into clear insights for varied audiences. ○ Strong leadership qualities, with a proven ability to mentor junior team members. ○ Organizational and project management skills to handle multiple initiatives in a fast-paced environment. Interested candidates can send their resumes on td@rwindia.co

Posted 2 weeks ago

Apply

1.0 - 3.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Working with Us Challenging. Meaningful. Life-changing. Those aren't words that are usually associated with a job. But working at Bristol Myers Squibb is anything but usual. Here, uniquely interesting work happens every day, in every department. From optimizing a production line to the latest breakthroughs in cell therapy, this is work that transforms the lives of patients, and the careers of those who do it. You'll get the chance to grow and thrive through opportunities uncommon in scale and scope, alongside high-achieving teams. Take your career farther than you thought possible. Bristol Myers Squibb recognizes the importance of balance and flexibility in our work environment. We offer a wide variety of competitive benefits, services and programs that provide our employees with the resources to pursue their goals, both at work and in their personal lives. Read more careers.bms.com/working-with-us . Position Summary The GPS Data & Analytics Software Engineer I role is accountable for developing data solutions. The role will be accountable for developing the pipelines for the data enablement projects, production/application support and enhancements. Additional responsibilities include data analysis, data operations process and tools, data cataloguing, and developing data SME skills in Global Product Development and Supply - Analytics and AI Enablement organization. Key Responsibilities The Data Engineer will be responsible for designing, building, delivering and maintaining high quality data products and analytic ready data solutions for GPS Cell Therapy Develop cloud-based (AWS) data pipelines using DBT and Glue Optimize data storage and retrieval to ensure efficient performance and scalability Collaborate with data architects, data analysts and stake holders to understand their data needs and ensure that the data infrastructure supports their requirements Ensure data quality and protection through validation, testing, and security protocols Implement and maintain security protocols to protect sensitive data Stay up to date with emerging trends and technologies in data engineering, analytical engineering and analytics and adapt with new technologies Participate in the analysis, design, build, manage, and operate lifecycle of the enterprise data lake and analytics focused digital capabilities Work in agile environment Debugging issues on the go Understanding the existing models if required and taking it forward Using JIRA for effort estimation, task tracking and communication about the task Using GIT for version control, quality checks and reviews Proficient Python, Spark, SQL, AWS Redshift, DBT, AWS S3, Glue/Glue Studio, Athena, IAM, other Native AWS Service familiarity with Domino/data lake principles Good to have knowledge/hands on experience in React Js for creating analytics dashboard if required Good to have knowledge about AWS Cloud Formation Templates Partner with other data, platform, and cloud teams to identify opportunities for continuous improvements Required 1-3 years of experience in information technology field in developing AWS cloud native data lakes and ecosystems Understanding of cloud technologies preferably AWS and related services in delivering and supporting data and analytics solutions/data lakes Should have working knowledge of GIT and version control good practices Proficient in Python, Spark, SQL, AWS Services Good to have experience in React.js and Full stack technologies Good to have worked in agile development environment and have used JIRA or similar task 
tracking & management tools Good to have experience/knowledge of working with DBT Knowledge of data security and privacy best practices Ideal Candidates Would Also Have Prior experience in global life sciences especially in the GPS functional area will be a plus Experience working internationally with a globally dispersed team including diverse stakeholders and management of offshore technical development team(s) Strong communication and presentation skills Other Qualifications Bachelor's degree in computer science, Information Systems, Computer Engineering or equivalent is preferred If you come across a role that intrigues you but doesn't perfectly line up with your resume, we encourage you to apply anyway. You could be one step away from work that will transform your life and career. Uniquely Interesting Work, Life-changing Careers With a single vision as inspiring as Transforming patients' lives through science™ , every BMS employee plays an integral role in work that goes far beyond ordinary. Each of us is empowered to apply our individual talents and unique perspectives in a supportive culture, promoting global participation in clinical trials, while our shared values of passion, innovation, urgency, accountability, inclusion and integrity bring out the highest potential of each of our colleagues. On-site Protocol BMS has an occupancy structure that determines where an employee is required to conduct their work. This structure includes site-essential, site-by-design, field-based and remote-by-design jobs. The occupancy type that you are assigned is determined by the nature and responsibilities of your role Site-essential roles require 100% of shifts onsite at your assigned facility. Site-by-design roles may be eligible for a hybrid work model with at least 50% onsite at your assigned facility. For these roles, onsite presence is considered an essential job function and is critical to collaboration, innovation, productivity, and a positive Company culture. For field-based and remote-by-design roles the ability to physically travel to visit customers, patients or business partners and to attend meetings on behalf of BMS as directed is an essential job function. BMS is dedicated to ensuring that people with disabilities can excel through a transparent recruitment process, reasonable workplace accommodations/adjustments and ongoing support in their roles. Applicants can request a reasonable workplace accommodation/adjustment prior to accepting a job offer. If you require reasonable accommodations/adjustments in completing this application, or in any part of the recruitment process, direct your inquiries to adastaffingsupport@bms.com . Visit careers.bms.com/ eeo -accessibility to access our complete Equal Employment Opportunity statement. BMS cares about your well-being and the well-being of our staff, customers, patients, and communities. As a result, the Company strongly recommends that all employees be fully vaccinated for Covid-19 and keep up to date with Covid-19 boosters. BMS will consider for employment qualified applicants with arrest and conviction records, pursuant to applicable laws in your area. If you live in or expect to work from Los Angeles County if hired for this position, please visit this page for important additional information https //careers.bms.com/california-residents/ Any data processed in connection with role applications will be treated in accordance with applicable data privacy policies and regulations.
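For readers new to the AWS-native tooling this role lists (S3, Glue, Athena), a minimal boto3 sketch that runs an Athena query and prints the result rows might look like this; the database, table, and results bucket are hypothetical.

```python
import time

import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Hypothetical database, table, and results bucket.
QUERY = "SELECT batch_id, COUNT(*) AS rows_loaded FROM curated.lot_manifests GROUP BY batch_id"

start = athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "curated"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = start["QueryExecutionId"]

# Poll until the query finishes; a production pipeline would add timeouts and backoff.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows[1:]:  # the first row holds column headers
        print([col.get("VarCharValue") for col in row["Data"]])
```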

Posted 2 weeks ago

Apply