4.0 - 8.0 years
6 - 10 Lacs
Thane
Work from Office
Role Overview
EssentiallySports is seeking a Growth Product Manager who can scale our web platform's reach, engagement, and impact. This is not a traditional marketing role—your job is to engineer growth through product innovation, user journey optimization, and experimentation. You'll be the bridge between editorial, tech, and analytics—turning insights into actions that drive sustainable audience and revenue growth.
Key Responsibilities
• Own the entire web user journey, from page discovery to conversion to retention.
• Identify product-led growth opportunities using scroll depth, CTRs, bounce rates, and cohort behavior.
• Optimize high-traffic areas of the site—landing pages, article CTAs, newsletter modules—for conversion and time-on-page.
• Set up and scale A/B testing and experimentation pipelines for UI/UX, headlines, engagement surfaces, and signup flows.
• Collaborate with SEO and Performance Marketing teams to translate high-ranking traffic into engaged, loyal users.
• Partner with content and tech teams to develop recommendation engines, personalization strategies, and feedback loops.
• Monitor analytics pipelines, from GA4 to Athena dashboards, to derive insights and drive decision-making.
• Introduce AI-driven features (LLM prompts, content auto-summaries, etc.) that personalize or simplify the user experience.
• Use tools like Jupyter, Google Analytics, Glue, and others to synthesize data into growth opportunities.
Who You Are
• 4+ years of experience in product growth, web engagement, or analytics-heavy roles.
• Deep understanding of web traffic behavior, engagement funnels, bounce/exit analysis, and retention loops.
• Hands-on experience running product experiments and growth sprints and interpreting funnel analytics.
• Strong proficiency in SQL, GA4, marketing analytics, and campaign management.
• Understands customer segmentation, LTV analysis, cohort behavior, and user funnel optimization.
• Thrives in ambiguity and loves building things from scratch.
• Passionate about AI, automation, and building sustainable growth engines.
• Thinks like a founder: drives initiatives independently, hunts for insights, moves fast.
• A team player who collaborates across engineering, growth, and editorial teams.
• Proactive and solution-oriented, always spotting opportunities for real growth.
• Thrives in a fast-moving environment, taking ownership and driving impact.
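For context on the experimentation work described above, here is a minimal sketch of the arithmetic behind an A/B test readout, a two-proportion z-test in plain Python; the signup counts are hypothetical and not from this posting:

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> tuple[float, float]:
    """Compare conversion rates of variants A and B; returns (z, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)              # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal CDF via erf
    return z, p_value

# Hypothetical signup counts: control vs. a new newsletter module
z, p = two_proportion_z_test(conv_a=480, n_a=12000, conv_b=560, n_b=12000)
print(f"z={z:.2f}, p={p:.4f}")  # significant at alpha=0.05 if p < 0.05
```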
Posted 2 weeks ago
3.0 - 6.0 years
6 - 15 Lacs
Pune
Work from Office
Sr. Software Engineer with advanced Python skills for product development in ML & Generative AI. Hands-on experience with FastAPI servers in a production environment. AI Engineers to design and develop a high-quality Generative AI platform on AWS.
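As a rough sketch of the FastAPI production-serving pattern this role calls for (the endpoint, model call, and service name are illustrative assumptions, not details from the posting):

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="GenAI inference service")

class Prompt(BaseModel):
    text: str
    max_tokens: int = 256

@app.post("/generate")
async def generate(prompt: Prompt) -> dict:
    # Placeholder for a real model call (e.g., a hosted-LLM client).
    completion = prompt.text[: prompt.max_tokens]
    return {"completion": completion}

# In production this would typically run behind an ASGI server, e.g.:
#   uvicorn app:app --host 0.0.0.0 --port 8000 --workers 4
```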
Posted 2 weeks ago
2.0 - 7.0 years
3 - 7 Lacs
Thane, Navi Mumbai, Mumbai (All Areas)
Work from Office
Job Title: Data Analyst/Engineer
Location: Mumbai
Experience: 3-4 Years
Job Summary: We are seeking a skilled Data Analyst/Engineer with expertise in AWS S3 and Python to manage and process large datasets in a cloud environment. The ideal candidate will be responsible for developing efficient data pipelines, managing data storage, and optimizing data workflows in AWS. Your role will involve using your Python skills to automate data tasks.
Key Responsibilities:
Python Scripting and Automation:
• Develop Python scripts for automating data collection, transformation, and loading into cloud storage systems.
• Create robust ETL pipelines to move data between systems and perform data transformation.
• Use Python for interacting with AWS services, including S3 and other AWS resources.
Data Workflow Optimization:
• Design and implement efficient data workflows and pipelines in the AWS cloud environment.
• Monitor and optimize data processing to ensure quick and accurate delivery of datasets.
• Work closely with other teams to integrate data from various sources into S3 for analysis and reporting.
Cloud Services & Data Integration:
• Leverage other AWS services (e.g., Lambda, EC2, RDS) to manage and process data in a scalable manner.
• Integrate data sources through APIs, ensuring real-time availability of critical data.
Required Skills & Qualifications:
• Technical Expertise: Strong experience managing and working with AWS S3 buckets and other AWS services. Advanced proficiency in Python, including experience with libraries such as boto3 and Pandas. Hands-on experience building and maintaining ETL pipelines for large datasets.
• Cloud Technologies: Solid understanding of AWS cloud architecture, including S3, Lambda, and EC2. Experience with AWS IAM (Identity and Access Management) for securing S3 buckets.
• Problem Solving & Automation: Proven ability to automate data workflows using Python. Strong analytical and problem-solving skills, with a focus on optimizing data storage and processing.
Preferred Qualifications:
• Bachelor's degree in Computer Science or Data Engineering.
• Experience with other AWS services, such as Glue, Redshift, or Athena.
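A minimal sketch of the kind of boto3-driven S3 automation described above; the bucket, keys, and filter rule are hypothetical:

```python
import csv
import io

import boto3

s3 = boto3.client("s3")

def transform_and_reload(bucket: str, src_key: str, dst_key: str) -> None:
    """Download a CSV from S3, filter rows, and write the result back."""
    body = s3.get_object(Bucket=bucket, Key=src_key)["Body"].read().decode("utf-8")
    reader = csv.DictReader(io.StringIO(body))
    rows = list(reader)
    cleaned = [r for r in rows if r.get("status") == "active"]   # hypothetical rule
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=reader.fieldnames)
    writer.writeheader()
    writer.writerows(cleaned)
    s3.put_object(Bucket=bucket, Key=dst_key, Body=out.getvalue().encode("utf-8"))

transform_and_reload("analytics-bucket", "raw/users.csv", "curated/users.csv")
```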
Posted 2 weeks ago
5.0 - 8.0 years
6 - 10 Lacs
Mumbai, Maharashtra, India
On-site
Responsibilities:
• Deploy, manage, and troubleshoot AWS services including VPCs, Subnets, Security Groups, IAM, Route 53, and other networking components to ensure high availability, performance, and security.
• Provide in-depth support for AWS Glue, Athena, and Lambda, troubleshooting and optimizing data processing and serverless architectures.
• Leverage AWS networking expertise to design, implement, and optimize connectivity solutions, including Transit Gateways, VPC Peering, Direct Connect, and VPN.
• Monitor and maintain Kubernetes clusters running on AWS EKS, ensuring seamless container orchestration, scaling, and security.
• Develop automation scripts using Python (Boto3) for AWS infrastructure management and operational tasks such as provisioning resources, configuring security settings, and monitoring applications.
• Utilize Terraform for Infrastructure as Code (IaC) to automate the deployment, management, and scaling of cloud infrastructure.
• Optimize and manage RDS instances, including routine tasks such as backups, performance tuning, and security updates.
• Perform SQL queries to troubleshoot database-related issues and optimize data retrieval from RDS and other SQL-based databases.
• Collaborate with development, security, and operations teams to ensure infrastructure changes, deployments, and updates are smoothly integrated with minimal disruption.
• Create and maintain detailed documentation of infrastructure designs, procedures, troubleshooting guidelines, and best practices.
• Stay up to date with AWS best practices, tools, and new services to improve infrastructure performance and cost-efficiency.
Required Skills and Qualifications:
• 5+ years of hands-on experience working with AWS cloud infrastructure, with a focus on AWS networking (VPC, Subnets, Route 53, etc.).
• Proficient in managing AWS services including Athena, Glue, Lambda, CloudFront, and RDS.
• Expertise in Kubernetes and EKS for container orchestration and management.
• Strong experience in Python scripting using Boto3 for automating AWS services.
• Proficient in Terraform for managing and deploying cloud infrastructure using Infrastructure as Code (IaC).
• Good understanding of SQL for performing queries and troubleshooting database performance issues.
• Familiarity with AWS security best practices, IAM roles, and policies.
• Ability to analyze and troubleshoot complex networking, cloud, and infrastructure issues in a fast-paced environment.
• Excellent communication and documentation skills, with the ability to work collaboratively across teams.
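As an illustration of the Python (Boto3) operational automation this role mentions, a small sketch that audits security groups for SSH open to the world; the specific check is an assumed example task, not from the posting:

```python
import boto3

ec2 = boto3.client("ec2")

def open_ssh_groups() -> list[str]:
    """Flag security groups that allow SSH (port 22) from 0.0.0.0/0."""
    flagged = []
    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        for perm in sg.get("IpPermissions", []):
            if perm.get("FromPort") == 22 and any(
                r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", [])
            ):
                flagged.append(sg["GroupId"])
    return flagged

print(open_ssh_groups())
```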
Posted 2 weeks ago
6.0 - 9.0 years
2 - 4 Lacs
Chennai, Tamil Nadu, India
On-site
Responsibilities:
• Design and development of Java applications for SPDJI web sites and its feeder systems.
• Participate in multiple software development processes including coding, testing, debugging, and documentation.
• Develop software applications based on clear business specifications.
• Work on new initiatives and support existing Index applications.
• Perform application and system performance tuning and troubleshoot performance issues.
• Develop web-based applications and build rich front-end user interfaces.
• Build applications with object-oriented concepts and apply design patterns.
• Integrate in-house applications with various vendor software platforms.
• Set up development environments/sandboxes for application development.
• Check in application code changes into the source repository.
• Perform unit testing of application code and fix errors.
• Interface with databases to extract information and build reports.
• Effectively interact with customers, business users, and IT staff.
What we're looking for:
Basic Qualifications:
• Bachelor's degree in Computer Science, Information Systems, or Engineering is required, or, in lieu, a demonstrated equivalence in work experience.
• 6 to 9 years of IT experience in application development and support.
• Strong experience with Java, J2EE, JMS & EJBs.
• Advanced SQL & basic PL/SQL programming.
• Basic networking knowledge / Unix scripting.
• Exposure to UI technologies like React JS.
• Basic understanding of AWS cloud (EC2, EMR, Lambda, S3, Glue, etc.).
• Excellent communication and interpersonal skills are essential, with strong verbal and writing proficiencies.
Preferred Qualifications:
• Experience working with large datasets in Equity, Commodities, Forex, Futures and Options asset classes.
• Experience with Index/Benchmarks or Asset Management or Trading platforms.
• Basic knowledge of user interface design & development using jQuery, HTML5 & CSS.
Posted 2 weeks ago
3.0 - 8.0 years
5 - 10 Lacs
Noida
Work from Office
Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: PySpark
Good-to-have skills: NA
Minimum 3 years of experience is required.
Educational Qualification: 15 years of full-time education
Summary: The ideal candidate will work in a team environment that demands technical excellence, whose members are expected to hold each other accountable for the overall success of the end product. The focus for this team is on the delivery of innovative solutions to complex problems, but also with a mind to drive simplicity in refining and supporting of the solution by others.
Roles & Responsibilities:
• Be accountable for delivery of business functionality.
• Work on the AWS cloud to migrate/re-engineer data and applications from on-premise to cloud.
• Be responsible for engineering solutions conformant to enterprise standards, architecture, and technologies.
• Provide technical expertise through a hands-on approach, developing solutions that automate testing between systems.
• Perform peer code reviews, merge requests, and production releases.
• Implement design/functionality using Agile principles.
• Bring a proven track record of quality software development and an ability to innovate outside of traditional architecture/software patterns when needed.
• Show a desire to collaborate in a high-performing team environment, and an ability to influence and be influenced by others.
• Have a quality mindset: not just code quality, but also ensuring ongoing data quality by monitoring data to identify problems before they have business impact.
• Be entrepreneurial and business-minded; ask smart questions, take risks, and champion new ideas.
• Take ownership and accountability.
Experience Required: 3 to 5 years of experience in application program development.
Experience Desired:
• Knowledge and/or experience with healthcare information domains.
• Documented experience in a business intelligence or analytic development role on a variety of large-scale projects.
• Documented experience working with databases larger than 5 TB and excellent data analysis skills.
• Experience with TDD/BDD.
• Experience working with Spark and real-time analytic frameworks.
Education and Training Required: Bachelor's degree in Engineering or Computer Science.
Primary Skills: Python, Databricks, Teradata, SQL, Unix, ETL, Data Structures, Looker, Tableau, Git, Jenkins, RESTful & GraphQL APIs; AWS services such as Glue, EMR, Lambda, Step Functions, CloudTrail, CloudWatch, SNS, SQS, S3, VPC, EC2, RDS, IAM.
Additional Skills:
• Ability to rapidly prototype and storyboard/wireframe development as part of application design.
• Write referenceable and modular code.
• Willingness to continuously learn and share learnings with others.
• Ability to communicate design processes, ideas, and solutions clearly and effectively to teams and clients.
• Ability to manipulate and transform large datasets efficiently.
• Excellent troubleshooting skills to root-cause complex issues.
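A short PySpark sketch of the kind of AWS data-pipeline work described above; paths, column names, and the quality rule are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims-etl").getOrCreate()

# Hypothetical S3 paths; an EMR or Glue job would read real bucket locations.
claims = spark.read.option("header", True).csv("s3://raw-zone/claims/")

cleaned = (
    claims
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount") > 0)                 # basic data-quality gate
    .withColumn("load_date", F.current_date())
)

cleaned.write.mode("overwrite").partitionBy("load_date").parquet("s3://curated-zone/claims/")
```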
Posted 2 weeks ago
5.0 - 7.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Data Engineer to join our team in Ban/Hyd/Chn/Gur/Noida, Karnataka (IN-KA), India (IN).
Required skills (5 years of experience):
• Spark, Scala, Sqoop, GitHub, SQL
• AWS services: EMR, S3, Lake Formation, Glue, Athena, Lambda, Step Functions
• Control-M
• Cloudera services: HDFS, Hive, Impala
• Confluence, Jira, ServiceNow
About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at
NTT DATA endeavors to make accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at . This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click . If you'd like more information on your EEO rights under the law, please click . For Pay Transparency information, please click.
Posted 2 weeks ago
6.0 - 9.0 years
19 - 25 Lacs
Noida, Hyderabad, Chennai
Hybrid
Role & responsibilities
• Proficient in Python for data manipulation and workflow automation.
• Strong knowledge of SQL Server and the SSRS reporting tool; able to modify existing reports. HTML reporting and any other open-source reporting tool is an added advantage.
• Knowledge of Postgres is an added advantage.
• Developing new reports using existing data sources and models, or creating entirely new data models from scratch.
• Writing SQL code to query databases to retrieve data for analysis.
• Designing new reports from scratch or modifying existing templates to meet business needs.
• Creating custom reports that display specific data in a format requested by users.
• Analyzing data to identify trends or patterns that may impact business operations.
• Researching new data sources and methods to improve reporting capabilities.
• Presenting results in visual formats such as charts and graphs.
• Experience with ETL tools, such as SSIS.
• Ability to develop reports from multiple data sources.
• Assist in the development of data visualization standards and best practices.
• Collaborate with other developers, analysts, and stakeholders to understand reporting and data needs.
• Experience working with cloud platforms, especially AWS (S3, Glue, Lambda, EMR, etc.).
Preferred candidate profile: Looking for immediate joiners who can start within 15 days. Hiring for Hyderabad/Chennai/Noida/Pune locations.
Posted 2 weeks ago
6.0 - 10.0 years
0 - 2 Lacs
Gurugram
Remote
We are seeking an experienced AWS Data Engineer to join our dynamic team. The ideal candidate will have hands-on experience building and managing scalable data pipelines on AWS, utilizing Databricks, and a deep understanding of the Software Development Life Cycle (SDLC). This person will play a critical role in enabling our data architecture, driving data quality, and ensuring the reliable and efficient flow of data throughout our systems.
Required Skills:
• 7+ years of comprehensive experience working as a Data Engineer, with expertise in AWS services (S3, Glue, Lambda, etc.).
• In-depth knowledge of Databricks, pipeline development, and data engineering.
• 2+ years of experience working with Databricks for data processing and analytics.
• Ability to architect and design pipelines, e.g., Delta Live Tables.
• Proficient in programming languages such as Python, Scala, or Java for data engineering tasks.
• Experience with SQL and relational databases (e.g., PostgreSQL, MySQL).
• Experience with ETL/ELT tools and processes in a cloud environment.
• Familiarity with Big Data processing frameworks (e.g., Apache Spark).
• Experience with data modeling, data warehousing, and building scalable architectures.
• Understanding and implementation of security aspects when consuming data from different sources.
Preferred Qualifications:
• Experience with Apache Airflow or other workflow orchestration tools; Terraform, Python, and Spark will be preferred.
• AWS Certified Solutions Architect, AWS Certified Data Analytics Specialty, or similar certifications.
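As a hedged sketch of the Delta Live Tables pipeline design this posting mentions, assuming the Databricks DLT runtime (which supplies the dlt module and the spark session); table names and paths are hypothetical:

```python
# Runs only inside a Databricks Delta Live Tables pipeline, where `dlt` and
# `spark` are provided by the runtime; source path and table names are placeholders.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw events ingested from cloud storage")
def raw_events():
    return spark.read.json("s3://landing-zone/events/")

@dlt.table(comment="Deduplicated, typed events")
@dlt.expect_or_drop("valid_id", "event_id IS NOT NULL")   # declarative quality rule
def clean_events():
    return (
        dlt.read("raw_events")
        .dropDuplicates(["event_id"])
        .withColumn("ingested_at", F.current_timestamp())
    )
```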
Posted 2 weeks ago
4.0 - 8.0 years
9 - 14 Lacs
Hyderabad
Work from Office
ABOUT EVERNORTH: Evernorth exists to elevate health for all, because we believe health is the starting point for human potential and progress. As champions for affordable, predictable and simple health care, we solve the problems others don't, won't or can't.
Position Overview
Cigna, a leading Health Services company, is looking for data engineers/developers in our Data & Analytics organization. The Full Stack Engineer is responsible for the delivery of a business need end-to-end, starting from understanding the requirements to deploying the software into production. This role requires you to be fluent in some of the critical technologies, with proficiency in others, and to have a hunger to learn on the job and add value to the business. Critical attributes of being a Full Stack Engineer, among others, are ownership and accountability. In addition to delivery, the Full Stack Engineer should have an automation-first and continuous-improvement mindset: he/she should drive the adoption of CI/CD tools and support the improvement of the tool sets/processes.
Behaviours of a Full Stack Engineer
Full Stack Engineers are able to articulate clear business objectives aligned to technical specifications and work in an iterative, agile pattern daily. They have ownership over their work tasks, embrace interacting with all levels of the team, and raise challenges when necessary. We aim to be cutting-edge engineers, not institutionalized developers.
Responsibilities
• Minimize "meetings" to get requirements and have direct business interactions
• Write referenceable & modular code
• Design and architect the solution independently
• Be fluent in particular areas and have proficiency in many areas
• Have a passion to learn
• Take ownership and accountability
• Understand when to automate and when not to
• Have a desire to simplify
• Be entrepreneurial / business-minded
• Have a quality mindset: not just code quality, but also ensuring ongoing data quality by monitoring data to identify problems before they have business impact
• Take risks and champion new ideas
Qualifications
Experience Required:
• 3 - 5 years being part of Agile teams
• 3 - 5 years of scripting
• 2+ years of AWS hands-on experience (S3, Lambda)
• 2+ years of experience with PySpark or Python
• 2+ years of experience with cloud technologies such as AWS
• 2+ years of hands-on experience with SQL
Experience Desired:
• Experience with GitHub
• Teradata, AWS (Glue, Lambda), Databricks, Snowflake, Angular, REST API, Terraform, Jenkins (CloudBees, Jenkinsfile/Groovy, password vault)
Education and Training Required:
• Knowledge and/or experience with health care information domains is a plus
• Computer science degree is good to have
Primary Skills:
• JavaScript, Python, PySpark, TDV, R, Ruby, Perl
• Lambdas, S3, EC2
• Databricks, Snowflake, Jenkins, Kafka, API languages, Angular, Selenium, AI & Machine Learning
Additional Skills:
• Excellent troubleshooting skills
• Strong communication skills
• Fluent in BDD and TDD development methodologies
• Work in an agile CI/CD environment (Jenkins experience a plus)
Location & Hours of Work
Full-time position, working 40 hours per week. Expected overlap with US hours as appropriate. Primarily based in the Innovation Hub in Hyderabad, India, in a hybrid working model (3 days WFO and 2 days WAH).
Equal Opportunity Statement
Evernorth is an Equal Opportunity Employer actively encouraging and supporting organization-wide involvement of staff in diversity, equity, and inclusion efforts to educate, inform and advance both internal practices and external work with diverse client populations.
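A minimal sketch of the AWS Lambda-plus-S3 pattern listed in the skills above; the handler follows the standard S3 event-notification shape, while the bucket contents are hypothetical:

```python
import json
import urllib.parse

import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    """Triggered by an S3 put event; logs basic metadata for each new object."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        head = s3.head_object(Bucket=bucket, Key=key)
        print(json.dumps({"bucket": bucket, "key": key, "bytes": head["ContentLength"]}))
    return {"statusCode": 200}
```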
About Evernorth Health Services Evernorth Health Services, a division of The Cigna Group, creates pharmacy, care and benefit solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention and treatment of illness and disease more accessible to millions of people. Join us in driving growth and improving lives.
Posted 2 weeks ago
5.0 - 10.0 years
22 - 27 Lacs
Bengaluru
Work from Office
Data Strategy and Planning: Develop and implement data architecture strategies that align with organizational goals and objectives. Collaborate with business stakeholders to understand data requirements and translate them into actionable plans.
Data Modeling: Design and implement logical and physical data models to support business needs. Ensure data models are scalable, efficient, and comply with industry best practices.
Database Design and Management: Oversee the design and management of databases, selecting appropriate database technologies based on requirements. Optimize database performance and ensure data integrity and security.
Data Integration: Define and implement data integration strategies to facilitate the seamless flow of information across the organization.
Responsibilities:
• Experience in data architecture and engineering
• Proven expertise with the Snowflake data platform
• Strong understanding of ETL/ELT processes and data integration
• Experience with data modeling and data warehousing concepts
• Familiarity with performance tuning and optimization techniques
• Excellent problem-solving skills and attention to detail
• Strong communication and collaboration skills
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
• Cloud & Data Architecture: AWS, Snowflake
• ETL & Data Engineering: AWS Glue, Apache Spark, Step Functions
• Big Data & Analytics: Athena, Presto, Hadoop
• Database & Storage: SQL, SnowSQL
• Security & Compliance: IAM, KMS, Data Masking
Preferred technical and professional experience:
• Cloud Data Warehousing: Snowflake (Data Modeling, Query Optimization)
• Data Transformation: DBT (Data Build Tool) for ELT pipeline management
• Metadata & Data Governance: Alation (Data Catalog, Lineage, Governance)
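As a small illustration of working against Snowflake from Python, a sketch using the snowflake-connector-python library; account, credentials, and table names are placeholders:

```python
import snowflake.connector

# Connection parameters are placeholders; real deployments should pull
# credentials from a secrets manager, not source code.
conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="ANALYTICS_WH", database="ANALYTICS", schema="PUBLIC",
)
try:
    cur = conn.cursor()
    cur.execute(
        "CREATE TABLE IF NOT EXISTS daily_sales AS "
        "SELECT order_date, SUM(amount) AS total FROM orders GROUP BY order_date"
    )
    print(cur.fetchone())   # status row returned by the DDL statement
finally:
    conn.close()
```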
Posted 2 weeks ago
4.0 - 8.0 years
9 - 19 Lacs
Gurugram, Chennai, Bengaluru
Work from Office
Skills: AWS Glue, Lambda, PySpark, Python, SQL
Posted 2 weeks ago
3.0 - 6.0 years
4 - 9 Lacs
Chennai
Work from Office
**Position Overview:** We are seeking an experienced AWS Cloud Engineer with a robust background in Site Reliability Engineering (SRE). The ideal candidate will have 3 to 6 years of hands-on experience managing and optimizing AWS cloud environments with a strong focus on performance, reliability, scalability, and cost efficiency.
**Key Responsibilities:**
* Deploy, manage, and maintain AWS infrastructure, including EC2, ECS Fargate, EKS, RDS Aurora, VPC, Glue, Lambda, S3, CloudWatch, CloudTrail, API Gateway (REST), Cognito, Elasticsearch, ElastiCache, and Athena.
* Implement and manage Kubernetes (K8s) clusters, ensuring high availability, security, and optimal performance.
* Create, optimize, and manage containerized applications using Docker.
* Develop and manage CI/CD pipelines using AWS native services and YAML configurations.
* Proactively identify cost-saving opportunities and apply AWS cost optimization techniques.
* Set up secure access and permissions using IAM roles and policies.
* Install, configure, and maintain application environments, including Python-based frameworks (Django, Flask, FastAPI), PHP frameworks (CodeIgniter 4 (CI4), Laravel), and Node.js applications.
* Install and integrate AWS SDKs into application environments for seamless service interaction.
* Automate infrastructure provisioning, monitoring, and remediation using scripting and Infrastructure as Code (IaC).
* Monitor, log, and alert on infrastructure and application performance using CloudWatch and other observability tools.
* Manage and configure SSL certificates with ACM and load balancing using ELB.
* Conduct advanced troubleshooting and root-cause analysis to ensure system stability and resilience.
**Technical Skills:**
* Strong experience with AWS services: EC2, ECS, EKS, Lambda, RDS Aurora, S3, VPC, Glue, API Gateway, Cognito, IAM, CloudWatch, CloudTrail, Athena, ACM, ELB, ElastiCache, and Elasticsearch.
* Proficiency in container orchestration and microservices using Docker and Kubernetes.
* Competence in scripting (Shell/Bash), configuration with YAML, and automation tools.
* Deep understanding of SRE best practices, SLAs, SLOs, and incident response.
* Experience deploying and supporting production-grade applications in Python (Django, Flask, FastAPI), PHP (CI4, Laravel), and Node.js.
* Solid grasp of CI/CD workflows using AWS services.
* Strong troubleshooting skills and familiarity with logging/monitoring stacks.
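A brief sketch of the CloudWatch monitoring and alerting work described above, using boto3 to create a CPU alarm; the instance ID, thresholds, and SNS topic are hypothetical:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm on sustained high CPU for a hypothetical EC2 instance.
cloudwatch.put_metric_alarm(
    AlarmName="api-server-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                    # 5-minute datapoints
    EvaluationPeriods=3,           # 15 minutes above threshold
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:oncall-alerts"],
)
```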
Posted 2 weeks ago
9.0 - 14.0 years
25 - 40 Lacs
Bengaluru
Hybrid
Greetings from tsworks Technologies India Pvt. We are hiring for a Sr. Data Engineer - Snowflake with AWS. If you are interested, please share your CV to mohan.kumar@tsworks.io
Position: Senior Data Engineer
Experience: 9+ Years
Location: Bengaluru, India (Hybrid)
Mandatory Required Qualifications:
• Strong proficiency in AWS data services such as S3 buckets, Glue and Glue Catalog, EMR, Athena, Redshift, DynamoDB, QuickSight, etc.
• Strong hands-on experience building Data Lakehouse solutions on Snowflake, using features such as streams, tasks, dynamic tables, data masking, data exchange, etc.
• Hands-on experience using scheduling tools such as Apache Airflow, DBT, and AWS Step Functions, and data governance products such as Collibra
• Expertise in DevOps and CI/CD implementation
• Excellent communication skills
In This Role, You Will:
• Design, implement, and manage scalable and efficient data architecture on the AWS cloud platform.
• Develop and maintain data pipelines for efficient data extraction, transformation, and loading (ETL) processes.
• Perform complex data transformations and processing using PySpark (AWS Glue, EMR, or Databricks), Snowflake's data processing capabilities, or other relevant tools.
• Apply hands-on experience with Data Lake solutions such as Apache Hudi, Delta Lake, or Iceberg.
• Develop and maintain data models within Snowflake and related tools to support reporting, analytics, and business intelligence needs.
• Collaborate with cross-functional teams to understand data requirements and design appropriate data integration solutions.
• Integrate data from various sources, both internal and external, ensuring data quality and consistency.
Skills & Knowledge:
• Bachelor's degree in computer science, engineering, or a related field.
• 9+ years of experience in Information Technology, designing, developing, and executing solutions.
• 4+ years of hands-on experience designing and executing data solutions on AWS and Snowflake cloud platforms as a Data Engineer.
• Strong proficiency in AWS services such as Glue, EMR, Athena, and Databricks, with file formats such as Parquet and Avro.
• Hands-on experience in data modelling and batch and real-time pipelines, using Python, Java, or JavaScript; experience working with RESTful APIs is required.
• Hands-on experience handling real-time data streams from Kafka or Kinesis is required.
• Expertise in DevOps and CI/CD implementation.
• Hands-on experience with SQL and NoSQL databases.
• Hands-on experience in data modelling, implementation, and management of OLTP and OLAP systems.
• Knowledge of data quality, governance, and security best practices.
• Familiarity with machine learning concepts and integration of ML pipelines into data workflows.
• Hands-on experience working in an Agile setting.
• Self-driven, naturally curious, and able to adapt to a fast-paced work environment.
• Can articulate, create, and maintain technical and non-technical documentation.
• AWS and Snowflake certifications are preferred.
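As a sketch of the Apache Airflow scheduling mentioned in the mandatory qualifications, a minimal two-task DAG (the schedule argument assumes Airflow 2.4+); the task bodies, DAG id, and stage names are placeholders:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_to_s3():
    ...  # pull from a source API or database into S3

def load_snowflake():
    ...  # COPY INTO Snowflake from the S3 stage

with DAG(
    dag_id="daily_lakehouse_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_to_s3", python_callable=extract_to_s3)
    load = PythonOperator(task_id="load_snowflake", python_callable=load_snowflake)
    extract >> load   # load runs only after extraction succeeds
```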
Posted 2 weeks ago
3.0 - 8.0 years
10 - 20 Lacs
Chennai
Hybrid
Roles & Responsibilities:
• We are looking for a strong Senior Data Engineer who will be primarily responsible for designing, building, and maintaining ETL/ELT pipelines.
• Integrate data from multiple sources or vendors to provide holistic insights from data.
• Build and manage Data Lake and Data Warehouse solutions, design data models, create ETL processes, implement data quality mechanisms, etc.
• Perform exploratory data analysis (EDA) to troubleshoot data-related issues and assist in the resolution of data issues.
• Experience in client interaction, both oral and written.
• Experience mentoring juniors and providing required guidance to the team.
Required Technical Skills:
• Extensive experience in languages such as Python, PySpark, and SQL (basics and advanced).
• Strong experience in Data Warehousing, ETL, data modelling, building ETL pipelines, and data architecture.
• Must be proficient in Redshift, Azure Data Factory, Snowflake, etc.
• Hands-on experience with cloud services like AWS S3, Glue, Lambda, CloudWatch, Athena, etc.
• Knowledge of Dataiku and Big Data technologies is good to have; basic knowledge of BI tools like Power BI and Tableau is a plus.
• Sound knowledge of data management, data operations, data quality, and data governance.
• Knowledge of SFDC and Waterfall/Agile methodology.
• Strong knowledge of the pharma domain / life sciences commercial data operations.
Qualifications:
• Bachelor's or master's degree in Engineering/MCA or equivalent.
• 4-6 years of relevant industry experience as a Data Engineer.
• Experience working on pharma syndicated data such as IQVIA, Veeva, Symphony; claims, CRM, sales, open data, etc.
• High motivation, good work ethic, maturity, self-organization, and personal initiative.
• Ability to work collaboratively and provide support to the team.
• Excellent written and verbal communication skills.
• Strong analytical and problem-solving skills.
Location: Chennai, India
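A small pandas sketch of the exploratory data analysis (EDA) step described above, profiling a feed before debugging pipeline issues; the file and columns are hypothetical:

```python
import pandas as pd

df = pd.read_csv("sales_feed.csv")   # hypothetical vendor extract

# Quick data-quality profile: types, null rates, cardinality, duplicates
profile = pd.DataFrame({
    "dtype": df.dtypes.astype(str),
    "nulls": df.isna().sum(),
    "null_pct": (df.isna().mean() * 100).round(2),
    "unique": df.nunique(),
})
print(profile)
print("duplicate rows:", df.duplicated().sum())
```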
Posted 2 weeks ago
6.0 - 11.0 years
13 - 18 Lacs
Hyderabad
Work from Office
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.
Primary Responsibilities:
• Lead a team to design, develop, test, deploy, maintain, and continuously improve software
• Mentor the engineering team to develop and perform as highly as possible
• Guide and help the team adopt best engineering practices
• Support driving modern solutions to complex problems
• Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
Required Qualifications:
• 7+ years of overall IT experience
• 3+ years of experience with AWS (all services needed for Big Data pipelines, like S3, EMR, SNS/SQS, EventBridge, Lambda, CloudWatch, MSK, Glue, container services, etc.), Spark, Scala, and Hadoop
• 2+ years of experience with Python, shell scripting, orchestration (Airflow or MWAA preferred), SQL, CI/CD (Git preferred, with experience in deployment pipelines), and DevOps (including supporting the production stack and working with SRE teams)
• 1+ years of experience with Infrastructure as Code (Terraform preferred)
• 1+ years of experience with Spark Streaming
• Healthcare domain & data standards knowledge
Preferred Qualification:
• Azure, Big Data, and/or Cloud certifications
At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes — an enterprise priority reflected in our mission.
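As an illustration of the Spark streaming plus MSK (Kafka) experience this role asks for, a minimal Structured Streaming sketch; the broker, topic, and S3 paths are placeholders, and the job assumes the spark-sql-kafka connector is on the classpath:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-stream").getOrCreate()

# Broker and topic names stand in for a real MSK cluster.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "b-1.msk.example:9092")
    .option("subscribe", "member-events")
    .load()
    .select(F.col("value").cast("string").alias("payload"))
)

query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3://curated/events/")
    .option("checkpointLocation", "s3://curated/checkpoints/events/")
    .start()
)
query.awaitTermination()
```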
Posted 2 weeks ago
6.0 - 11.0 years
16 - 20 Lacs
Hyderabad
Work from Office
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.
The Optum Technology Digital team is on a mission to disrupt the healthcare industry, transforming UHG into an industry-leading Consumer brand. We deliver hyper-personalized digital solutions that empower direct-to-consumer, digital-first experiences, educating, guiding, and empowering consumers to access the right care at the right time. Our mission is to revolutionize healthcare for patients and providers by delivering cutting-edge, personalized and conversational digital solutions. We're Consumer Obsessed, ensuring they receive exceptional support throughout their healthcare journeys. As we drive this transformation, we're revolutionizing customer interactions with the healthcare system, leveraging AI, cloud computing, and other disruptive technologies to tackle complex challenges. Serving UnitedHealth Group's digital technology needs, the Consumer Engineering team impacts millions of lives through UnitedHealthcare & Optum. We are seeking a dynamic individual who embodies modern engineering culture: someone with deep engineering expertise within a digital product model, a passion for innovation, and a relentless drive to enhance the consumer experience. Our ideal candidate thrives in an agile, fast-paced rapid-prototyping environment, embraces DevOps and continuous integration/continuous deployment (CI/CD) practices, and champions the Voice of the Customer. If you are driven by the pursuit of excellence, eager to innovate, and excited to make a tangible impact within a team that embraces modern technologies and consumer-centric strategies, while prioritizing robust cyber-security protocols, we invite you to explore this exciting opportunity with us. Join our team and be at the forefront of shaping the future of healthcare, where your unique skills will not only be recognized but celebrated.
Primary Responsibilities:
• Design and implement data models to analyse business, system, and security events for real-time insights and threat detection
• Conduct exploratory data analysis (EDA) to understand patterns and relationships across large data sets, and develop hypotheses for new model development
• Develop dashboards and reports to present actionable insights to business and security teams
• Build and automate near real-time analytics workflows on AWS, leveraging services like Kinesis, Glue, Redshift, and QuickSight
• Collaborate with AI/ML engineers to develop and validate data features for model inputs
• Interpret and communicate complex data trends to stakeholders and provide recommendations for data-driven decision-making
• Ensure data quality and governance standards, collaborating with data engineering teams to build quality data pipelines
• Develop data science algorithms and generate actionable insights as per platform needs, and work closely with cross-capability teams throughout the solution development lifecycle, from design to implementation and monitoring
• Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
Required Qualifications:
• B.Tech or Master's degree or equivalent experience
• 12+ years of experience in data engineering roles in data warehousing
• 3+ years of experience as a Data Scientist with a focus on building models for analytics and insights in AWS environments
• Experience with AWS data and analytics services (e.g., Kinesis, Glue, Redshift, Athena, Timestream)
• Hands-on experience with statistical analysis, anomaly detection, and predictive modelling
• Proficiency with SQL, Python, and data visualization tools like QuickSight, Tableau, or Power BI
• Proficiency in data wrangling, cleansing, and feature engineering
Preferred Qualifications:
• Experience in security data analytics, focusing on threat detection and prevention
• Knowledge of AWS security tools and understanding of cloud data security principles
• Familiarity with deploying data workflows using CI/CD pipelines in AWS environments
• Background in working with real-time data streaming architectures and handling high-volume event-based data
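A minimal sketch of the anomaly-detection work mentioned above, flagging points by rolling z-score in pandas; the login-count series is synthetic, for illustration only:

```python
import pandas as pd

def flag_anomalies(series: pd.Series, window: int = 60, z: float = 3.0) -> pd.Series:
    """Mark points more than `z` rolling standard deviations from the rolling mean."""
    mean = series.rolling(window, min_periods=window).mean()
    std = series.rolling(window, min_periods=window).std()
    return (series - mean).abs() > z * std

# Synthetic per-minute login counts, with one obvious spike repeated
logins = pd.Series([120, 118, 125, 122, 900, 121] * 20)
print(logins[flag_anomalies(logins, window=30)])
```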
Posted 2 weeks ago
3.0 - 7.0 years
12 - 17 Lacs
Bengaluru
Work from Office
Job Summary: Synechron is seeking an experienced Senior Data Engineer with expertise in AWS, Apache Airflow, and DBT to design and implement scalable, reliable data pipelines. The role involves collaborating with data teams and business stakeholders to develop data solutions that enable actionable insights and support organizational decision-making. The ideal candidate will bring data engineering experience, demonstrating strong technical skills, strategic thinking, and the ability to work in a fast-paced, evolving environment.
Software Requirements:
Required:
• Strong proficiency in AWS services including S3, Redshift, Lambda, and Glue, with proven hands-on experience
• Expertise in Apache Airflow for workflow orchestration and pipeline management
• Extensive experience with DBT for data transformation and modeling
• Solid knowledge of SQL for data querying and manipulation
Preferred:
• Familiarity with Hadoop, Spark, or other big data technologies
• Experience with NoSQL databases (e.g., DynamoDB, Cassandra)
• Knowledge of data governance and security best practices within cloud environments
Overall Responsibilities:
• Lead the design, development, and maintenance of scalable and efficient data pipelines and workflows utilizing AWS, Airflow, and DBT
• Collaborate with data scientists, analysts, and business teams to gather requirements and translate them into technical solutions
• Optimize Extract, Transform, Load (ETL) processes to enhance data quality, integrity, and timeliness
• Monitor pipeline performance, troubleshoot issues, and implement improvements to ensure operational excellence
• Enforce data management, governance, and security protocols across all data flows
• Mentor junior data engineers and promote best practices within the team
• Stay current with emerging data technologies and industry trends, recommending innovations for the data ecosystem
Technical Skills (By Category):
• Programming Languages: Essential: SQL, Python (preferred for scripting and automation); Preferred: Spark, Scala, Java (for big data integration)
• Databases/Data Management: Extensive experience with data warehousing (Redshift, Snowflake, or similar) and relational databases (MySQL, PostgreSQL); familiarity with NoSQL databases such as DynamoDB or Cassandra is a plus
• Cloud Technologies: AWS cloud platform, leveraging services like S3, Lambda, Glue, Redshift, and IAM security features
• Frameworks and Libraries: Apache Airflow, DBT, and related data orchestration and transformation tools
• Development Tools and Methodologies: Git, Jenkins, CI/CD pipelines, Agile/Scrum environment experience
• Security Protocols: Knowledge of data encryption, access control, and compliance standards in cloud data engineering
Experience Requirements:
• At least 8 years of professional experience in data engineering or related roles with a focus on cloud ecosystems and big data pipelines
• Demonstrated experience designing and managing end-to-end data workflows in AWS environments
• Proven success in collaborating with cross-functional teams and translating business requirements into technical solutions
• Prior experience mentoring junior engineers and leading data projects is highly desirable
Day-to-Day Activities:
• Develop, deploy, and monitor scalable data pipelines using AWS, Airflow, and DBT
• Collaborate regularly with data scientists, analysts, and business stakeholders to refine data requirements and deliver impactful solutions
• Troubleshoot production data pipeline issues to resolve data quality or performance bottlenecks
• Conduct code reviews, optimize existing workflows, and implement automation to improve efficiency
• Document data architecture, pipelines, and governance practices for knowledge sharing and compliance
• Keep abreast of emerging data tools and industry best practices, proposing enhancements to existing systems
Qualifications:
• Bachelor's degree in Computer Science, Data Science, Engineering, or a related field; Master's degree preferred
• Professional certifications such as AWS Certified Data Analytics – Specialty or related credentials are advantageous
• Commitment to continuous professional development and staying current with industry trends
Professional Competencies:
• Strong analytical, problem-solving, and critical thinking skills
• Excellent communication abilities to effectively liaise with technical and business teams
• Proven leadership in mentoring team members and managing project deliverables
• Ability to work independently, prioritize tasks, and adapt to changing business needs
• Innovative mindset focused on scalable, efficient, and sustainable data solutions
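As a sketch of one common pattern for orchestrating DBT from Airflow in the stack described above; the project path, target, and DAG id are assumptions, not details from the posting:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_transformations",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Hypothetical project directory and target profile
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt/analytics && dbt run --target prod",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/dbt/analytics && dbt test --target prod",
    )
    dbt_run >> dbt_test   # run transformations, then validate them
```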
Posted 2 weeks ago
5.0 - 10.0 years
18 - 32 Lacs
Hyderabad
Hybrid
Greetings from AstroSoft Technologies. We are back with an exciting job opportunity for AWS Data Engineer professionals. Join our growing team and explore with us at our Hyderabad office (Hybrid - Gachibowli).
No. of Openings: 10 positions
Role: AWS Data Engineer
Project Domain: USA Client - BFSI, Fintech
Experience: 5+ years
Work Location: Hyderabad (Hybrid - Gachibowli)
Job Type: Full-Time
Company: AstroSoft Technologies (https://www.astrosofttech.com/)
Astrosoft is an award-winning company that specializes in the areas of Data, Analytics, Cloud, AI/ML, Innovation, and Digital. We have a customer-first mindset and take extreme ownership in delivering solutions and projects for our customers, and we have consistently been recognized by our clients as the premium partner to work with. We bring to bear top-tier talent, a robust and structured project execution framework, and significant experience over the years, and we have an impeccable record in delivering solutions and projects for our clients. Founded in 2004, with headquarters in Florida and Texas, USA, and a corporate office in Hyderabad, India.
Benefits from Astrosoft Technologies:
• H1B sponsorship (depends on project & performance)
• Lunch & dinner (every day)
• Group health insurance coverage per industry standards
• Leave policy
• Skill enhancement certification
• Hybrid mode
Job Details:
Role: Senior AWS Data Engineer
Location: India, Hyderabad, Gachibowli (Vasavi SkyCity)
Job Type: Full Time
Shift Timings: 12.30 PM to 9.30 PM IST
Experience Range: 5+ years
Work Mode: Hybrid (Fri & Mon WFH)
Interview Process: 3 technical rounds
Job Summary:
• Strong experience with, and understanding of, streaming architecture and development practices using Kafka, Kinesis, Spark, Flink, etc.
• Strong AWS development experience using S3, SNS, SQS, MWAA (Airflow), Glue, DMS, and EMR.
• Strong knowledge of one or more programming languages: Python/Java/Scala (ideally Python).
• Experience using Terraform to build IaC components in AWS.
• Strong experience with ETL tools in AWS; ODI experience is a plus.
• Strong experience with database platforms: Oracle, AWS Redshift.
• Strong experience in SQL tuning, tuning ETL solutions, and physical optimization of databases.
• Very familiar with SRE concepts, including evaluating and implementing monitoring and observability tools like Splunk, Datadog, and CloudWatch, and other job, log, or dashboard concepts for customer support and application health checks.
• Ability to collaborate with our business partners to understand and implement their requirements.
• Excellent interpersonal skills; able to build consensus across teams.
• Strong critical thinking and ability to think out of the box.
• Self-motivated and able to perform under pressure.
• AWS certified (preferred).
Thanks & Regards,
Karthik Kumar, HR-TAG Lead - India
Astrosoft Technologies, Unit 1810, Level 18, Vasavi Sky City, Gachibowli, Hyderabad, Telangana 500081
Contact: +91-8712229084
Email: karthik.jangam@astrosofttech.com
Posted 3 weeks ago
5.0 - 10.0 years
20 - 27 Lacs
Hyderabad
Work from Office
Position: Experienced Data Engineer
We are seeking a skilled and experienced Data Engineer to join our fast-paced and innovative Data Science team. This role involves building and maintaining data pipelines across multiple cloud-based data platforms.
Requirements: A minimum of 5 years of total experience, with at least 3-4 years specifically in Data Engineering on a cloud platform.
Key Skills & Experience:
• Proficiency with AWS services such as Glue, Redshift, S3, Lambda, RDS, Amazon Aurora, DynamoDB, EMR, Athena, Data Pipeline, and Batch jobs.
• Strong expertise in SQL and Python; DBT and Snowflake; OpenSearch, Apache NiFi, and Apache Kafka.
• In-depth knowledge of ETL data patterns and Spark-based ETL pipelines.
• Advanced skills in infrastructure provisioning using Terraform and other Infrastructure-as-Code (IaC) tools.
• Hands-on experience with cloud-native delivery models, including PaaS, IaaS, and SaaS.
• Proficiency in Kubernetes, container orchestration, and CI/CD pipelines.
• Familiarity with GitHub Actions, GitLab, and other leading DevOps and CI/CD solutions.
• Experience with orchestration tools such as Apache Airflow and serverless/FaaS services.
• Exposure to NoSQL databases is a plus.
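A minimal Apache Kafka producer sketch (using the kafka-python client) for the streaming skills listed above; the broker, topic, and payload are placeholders:

```python
import json

from kafka import KafkaProducer  # kafka-python client

# Broker and topic stand in for a real cluster.
producer = KafkaProducer(
    bootstrap_servers=["broker-1:9092"],
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("orders", {"order_id": 42, "amount": 99.5, "status": "active"})
producer.flush()   # block until buffered records are delivered
producer.close()
```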
Posted 3 weeks ago
6.0 - 10.0 years
0 - 2 Lacs
Pune, Chennai, Bengaluru
Hybrid
Primary Skills: Python, PySpark, AWS, Glue
Location: Pan India
Roles and Responsibilities:
• Proficient in Python scripting and PySpark for data processing tasks.
• Strong SQL capabilities, with hands-on experience managing big data using ETL tools like Informatica.
• Experience with the AWS cloud platform and its data services, including S3, Redshift, Lambda, EMR, Airflow, Postgres, SNS, and EventBridge.
• Skilled in Bash shell scripting.
• Understanding of data lakehouse architecture, particularly with the Iceberg format, is a plus.
Preferred:
• Experience with Kafka and MuleSoft API.
• Understanding of healthcare data systems is a plus.
• Experience in Agile methodologies.
• Strong analytical and problem-solving skills.
• Effective communication and teamwork abilities.
Responsibilities:
• Develop and maintain data pipelines and ETL processes to manage large-scale datasets.
• Collaborate to design and test data architectures that align with business needs.
• Implement and optimize data models for efficient querying and reporting.
• Assist in the development and maintenance of data quality checks and monitoring processes.
• Support the creation of data solutions that enable analytical capabilities.
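As a sketch of a typical AWS Glue PySpark job matching the primary skills above; the catalog database, table, and output path are hypothetical:

```python
import sys

from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Catalog database/table and output path are placeholders.
dyf = glue_context.create_dynamic_frame.from_catalog(database="raw", table_name="orders")
cleaned = DynamicFrame.fromDF(dyf.toDF().filter("amount > 0"), glue_context, "cleaned")

glue_context.write_dynamic_frame.from_options(
    frame=cleaned,
    connection_type="s3",
    connection_options={"path": "s3://curated-zone/orders/"},
    format="parquet",
)
job.commit()
```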
Posted 3 weeks ago
5.0 - 10.0 years
22 - 27 Lacs
Navi Mumbai
Work from Office
Data Strategy and PlanningDevelop and implement data architecture strategies that align with organizational goals and objectives. Collaborate with business stakeholders to understand data requirements and translate them into actionable plans. Data ModelingDesign and implement logical and physical data models to support business needs. Ensure data models are scalable, efficient, and comply with industry best practices. Database Design and ManagementOversee the design and management of databases, selecting appropriate database technologies based on requirements. Optimize database performance and ensure data integrity and security. Data IntegrationDefine and implement data integration strategies to facilitate seamless flow of information across. Responsibilities: Experience in data architecture and engineering Proven expertise with Snowflake data platform Strong understanding of ETL/ELT processes and data integration Experience with data modeling and data warehousing concepts Familiarity with performance tuning and optimization techniques Excellent problem-solving skills and attention to detail Strong communication and collaboration skills Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Cloud & Data ArchitectureAWS , Snowflake ETL & Data EngineeringAWS Glue, Apache Spark, Step Functions Big Data & AnalyticsAthena,Presto, Hadoop Database & StorageSQL, Snow sql Security & ComplianceIAM, KMS, Data Masking Preferred technical and professional experience Cloud Data WarehousingSnowflake (Data Modeling, Query Optimization) Data TransformationDBT (Data Build Tool) for ELT pipeline management Metadata & Data GovernanceAlation (Data Catalog, Lineage, Governance
Posted 3 weeks ago
6.0 - 9.0 years
27 - 42 Lacs
Chennai
Work from Office
Data Analyst (Visualisation Engineer) - Skills and Qualifications:
• SQL - Mandatory
• Proficiency in Tableau or Power BI for data visualization - Mandatory
• Strong programming skills in Python, including experience with data analysis libraries - Mandatory
• Knowledge of AWS services like S3, Redshift, Glue, and Lambda - Nice to have
• Familiarity with orchestration tools like Apache Airflow and AWS Step Functions - Nice to have
• Understanding of statistical concepts and methodologies
• Excellent communication and presentation skills
Job Summary
We are seeking a highly skilled Sr. Developer with 6 to 9 years of experience to join our dynamic team. The ideal candidate will have extensive experience in either Tableau (Tableau API, Database and SQL, Tableau Cloud) or Power BI (Power BI Report Builder, Power BI Service, DAX, MS Power BI, Database and SQL). This role is hybrid with day shifts and no travel required. The Sr. Developer will play a crucial role in developing and maintaining our data visualization solutions, ensuring data accuracy, and providing actionable insights to drive business decisions.
Responsibilities
• Develop and maintain Tableau or Power BI dashboards and reports to provide actionable insights.
• Utilize the Tableau API or Power BI to integrate data from various sources and ensure seamless data flow.
• Design and optimize database schemas to support efficient data storage and retrieval.
• Write complex SQL queries to extract, manipulate, and analyze data.
• Collaborate with business stakeholders to understand their data needs and translate them into technical requirements.
• Ensure data accuracy and integrity by implementing data validation and quality checks.
• Provide technical support and troubleshooting for Tableau- or Power BI-related issues.
• Stay updated with the latest Tableau or Power BI features and best practices to enhance data visualization capabilities.
• Conduct performance tuning and optimization of Tableau or Power BI dashboards and reports.
• Train and mentor junior developers on Tableau or Power BI and SQL best practices.
• Work closely with the data engineering team to ensure data pipelines are robust and scalable.
• Participate in code reviews to maintain high-quality code standards.
• Document technical specifications and user guides for developed solutions.
Qualifications (Tableau)
• Must have extensive experience with the Tableau API and Tableau Cloud.
• Strong proficiency in databases and SQL for data extraction and manipulation.
• Experience with the Tableau work model in a hybrid environment.
• Excellent problem-solving skills and attention to detail.
• Ability to collaborate effectively with cross-functional teams.
• Strong communication skills to convey technical concepts to non-technical stakeholders.
• Nice to have: experience in performance tuning and optimization of Tableau solutions.
Qualifications (Power BI)
• Possess strong expertise in Power BI Report Builder, Power BI Service, DAX, and MS Power BI.
• Demonstrate proficiency in SQL and database management.
• Exhibit excellent problem-solving and analytical skills.
• Show ability to work collaboratively in a hybrid work model.
• Display strong communication skills to interact effectively with stakeholders.
• Have a keen eye for detail and a commitment to data accuracy.
• Maintain a proactive approach to learning and adopting new technologies.
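As an illustration of programmatic Tableau work like that mentioned above, a small sketch using the tableauserverclient library; the server URL, credentials, and site are placeholders:

```python
import tableauserverclient as TSC

# Server URL, credentials, and site id are placeholders.
auth = TSC.TableauAuth("report_admin", "***", site_id="analytics")
server = TSC.Server("https://tableau.example.com", use_server_version=True)

with server.auth.sign_in(auth):
    workbooks, _ = server.workbooks.get()      # first page of workbooks
    for wb in workbooks:
        print(wb.name, wb.project_name)
        server.workbooks.populate_views(wb)    # lazy-load views per workbook
        for view in wb.views:
            print("  view:", view.name)
```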
Posted 3 weeks ago
5.0 - 10.0 years
10 - 20 Lacs
Bengaluru, Mumbai (All Areas)
Work from Office
Key Responsibilities
• Design, develop, and optimize data pipelines using Python and AWS services such as Glue, Lambda, S3, EMR, Redshift, Athena, and Kinesis.
• Implement ETL/ELT processes to extract, transform, and load data from various sources into centralized repositories (e.g., data lakes or data warehouses).
• Collaborate with cross-functional teams to understand business requirements and translate them into scalable data solutions.
• Monitor, troubleshoot, and enhance data workflows for performance and cost optimization.
• Ensure data quality and consistency by implementing validation and governance practices.
• Work on data security best practices in compliance with organizational policies and regulations.
• Automate repetitive data engineering tasks using Python scripts and frameworks.
• Leverage CI/CD pipelines for deployment of data workflows on AWS.
Required Skills and Qualifications
• Professional Experience: 5+ years of experience in data engineering or a related field.
• Programming: Strong proficiency in Python, with experience in libraries like pandas, PySpark, or boto3.
• AWS Expertise: Hands-on experience with core AWS services for data engineering, such as AWS Glue for ETL/ELT, S3 for storage, Redshift or Athena for data warehousing and querying, Lambda for serverless compute, Kinesis or SNS/SQS for data streaming, and IAM roles for security.
• Databases: Proficiency in SQL and experience with relational (e.g., PostgreSQL, MySQL) and NoSQL (e.g., DynamoDB) databases.
• Data Processing: Knowledge of big data frameworks (e.g., Hadoop, Spark) is a plus.
• DevOps: Familiarity with CI/CD pipelines and tools like Jenkins, Git, and CodePipeline.
• Version Control: Proficient with Git-based workflows.
• Problem Solving: Excellent analytical and debugging skills.
Optional Skills
• Knowledge of data modeling and data warehouse design principles.
• Experience with data visualization tools (e.g., Tableau, Power BI).
• Familiarity with containerization (e.g., Docker) and orchestration (e.g., Kubernetes).
• Exposure to other programming languages like Scala or Java.
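A minimal boto3 sketch of producing to the Kinesis streams mentioned above; the stream name and payload are placeholders:

```python
import json

import boto3

kinesis = boto3.client("kinesis")

# Stream name and record fields stand in for a real pipeline.
record = {"user_id": "u-123", "event": "page_view", "ts": "2024-01-01T00:00:00Z"}
resp = kinesis.put_record(
    StreamName="clickstream",
    Data=json.dumps(record).encode("utf-8"),
    PartitionKey=record["user_id"],   # controls shard assignment
)
print(resp["ShardId"], resp["SequenceNumber"])
```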
Posted 3 weeks ago