Jobs
Interviews

522 EMR Jobs - Page 4

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

10.0 - 14.0 years

0 Lacs

Karnataka

On-site

As a Senior Lead Engineer specializing in Python and Spark within AWS, you will design, build, and maintain robust, scalable, and efficient ETL pipelines, with a primary focus on alignment with the data lakehouse architecture on AWS. You will leverage your extensive expertise in Python and Spark to develop and optimize workflows using AWS services such as Glue, Glue Data Catalog, Lambda, and S3.

In this role, you will implement data quality and governance frameworks to guarantee reliable and consistent data processing across the platform. Collaborating with cross-functional teams, you will gather requirements, provide technical insights, and deliver high-quality data solutions. Your responsibilities also include driving the migration of existing data processing workflows to the lakehouse architecture by leveraging Iceberg capabilities. As a key member of the team, you will establish and enforce best practices for coding standards, design patterns, and system architecture. Monitoring and improving system performance and data reliability through proactive analysis and optimization will be essential. Additionally, you will lead technical discussions, mentor team members, and promote a culture of continuous learning and innovation.

Your interactions will primarily involve senior management and the architectural group, development managers and team leads, data engineers and analysts, and agile team members, so excellent verbal and written interpersonal skills are crucial for articulating complex technical solutions to diverse audiences.

To excel in this role, you must have a consistent track record of designing and implementing complex data processing workflows using Python and Spark. Strong experience with AWS services such as Glue, Glue Data Catalog, Lambda, S3, and EMR is essential, with a focus on data lakehouse solutions, as is a deep understanding of data quality frameworks, data contracts, and governance processes. The ability to design and implement scalable, maintainable, and secure architectures using modern data technologies is crucial. Hands-on experience with Apache Iceberg and its integration within data lakehouse environments, along with strong problem-solving and performance-optimization skills for data workflows, will be key to success.

Desirable skills include familiarity with additional programming languages such as Java, experience with serverless computing paradigms, and knowledge of data visualization or reporting tools for stakeholder communication. Certification in AWS or data engineering (e.g., AWS Certified Data Analytics, Certified Spark Developer) would be advantageous. A bachelor's degree in Computer Science, Software Engineering, or a related field is helpful, although equivalent professional experience or certifications will also be considered.

By joining our dynamic organization at LSEG, you will have the opportunity to contribute to driving financial stability, empowering economies, and enabling sustainable growth, all while being part of a collaborative and creative culture that values diversity and sustainability.
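As a hedged illustration of the pipeline work this posting describes (not the employer's actual code), here is a minimal PySpark sketch of an ETL job appending to an Apache Iceberg table registered in the Glue Data Catalog. It assumes a Spark runtime with the Iceberg AWS bundle on the classpath (e.g., a Glue 4.0 job); all bucket, database, and table names are hypothetical placeholders.

```python
# Minimal sketch: PySpark ETL writing to an Apache Iceberg table on AWS.
# Bucket, database, and table names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder.appName("orders-etl")
    # Iceberg catalog backed by the Glue Data Catalog (Glue job parameters
    # normally supply these settings; shown inline here for clarity).
    .config("spark.sql.catalog.glue", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.glue.catalog-impl",
            "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.glue.warehouse",
            "s3://example-lakehouse/warehouse/")
    .getOrCreate()
)

# Extract: raw JSON landed in S3.
raw = spark.read.json("s3://example-lakehouse/raw/orders/")

# Transform: basic cleansing plus a simple data-quality gate.
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)
       .withColumn("ingest_date", F.current_date())
)

# Load: append into the Iceberg table via the DataFrameWriterV2 API.
clean.writeTo("glue.sales.orders").append()
```

Iceberg's catalog-based writes are what make the lakehouse migration mentioned above incremental: the same table can be read by Athena and EMR while Glue jobs append to it.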

Posted 2 weeks ago

Apply

4.0 - 9.0 years

10 - 17 Lacs

Bengaluru

Work from Office

Position: Contract to Hire (C2H)
Role: Cloud Engineer (AWS)
Experience: 4+ Years
Location: Remote
Notice Period: Immediate joiners

What you'll bring:
- AWS expertise in services such as EC2, Lambda, and EMR, and configuration of those services using Terraform to provision infrastructure (see the sketch below)
- Solid Big Data experience with Hadoop and Spark/Scala
- Solid Control-M experience: able to create jobs and modify parameters
- Experience with code repositories such as Git and Maven
- DevOps knowledge (Jenkins, scripting, and similar tools used for deploying software into environments); use of Jira

It would also be nice for you to have:
- Analyst skills: able to translate technical requirements for non-technical partners, deliver clear solutions, and create test-case scenarios
- Advanced SQL knowledge: able to test changes and issues properly by replicating code functionality in SQL
- Documentation: experience carrying out data and process analysis to create specification documents
- Finance knowledge: experience working in a Financial Services/Banking organisation with an understanding of Retail, Business, and Corporate Banking
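The post asks for Terraform; as a language-consistent illustration only, here is a hedged boto3 sketch of the same EMR provisioning a Terraform module would encode. Cluster name, release label, region, and instance sizes are hypothetical.

```python
# Sketch of EMR cluster provisioning via boto3; in this role the equivalent
# configuration would live in Terraform. All names and sizes are hypothetical.
import boto3

emr = boto3.client("emr", region_name="ap-south-1")

response = emr.run_job_flow(
    Name="spark-batch-cluster",
    ReleaseLabel="emr-6.15.0",
    Applications=[{"Name": "Spark"}, {"Name": "Hadoop"}],
    Instances={
        "InstanceGroups": [
            {"Name": "master", "InstanceRole": "MASTER",
             "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"Name": "core", "InstanceRole": "CORE",
             "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        # Transient cluster: terminate once submitted steps finish.
        "KeepJobFlowAliveWhenNoSteps": False,
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print("Cluster:", response["JobFlowId"])
```

Encoding this in Terraform instead of ad hoc scripts is what makes the infrastructure reproducible across environments, which is the point of the requirement.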

Posted 2 weeks ago

Apply

2.0 - 7.0 years

10 - 14 Lacs

Bengaluru

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: Amazon Web Services (AWS)
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that application development aligns with business objectives, overseeing project timelines, and facilitating communication among stakeholders to drive successful project outcomes. You will also engage in problem-solving activities, providing guidance and support to your team while ensuring adherence to best practices in application development.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Continuously evaluate and improve application development processes to increase efficiency.

Professional & Technical Skills:
- Primary: AWS + Python
- Secondary: DevOps, Terraform
- Good to have: AWS CDK (see the sketch below)
- 8-10 years of overall software development experience, with 5 years in AWS and 3 years in Python.
- Hands-on experience with AWS services: EC2, Lambda, SNS, SQS, Glue, Step Functions, CloudWatch, API Gateway, EMR, S3, DynamoDB, RDS, Athena.
- Hands-on experience writing Python code for AWS services such as Glue jobs, Lambda, and AWS CDK.
- Strong hands-on technical and debugging skills.
- 2+ years of DevOps experience with Terraform, Git, and CI/CD.
- Experience working in Agile development environments.
- Strong verbal and written communication skills, with the ability to engage directly with clients.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Amazon Web Services (AWS).
- This position is based at our Bengaluru office.
- A 15 years full-time education is required.
- Shift timing: 12:30 PM to 9:30 PM IST (weekdays)

Qualification: 15 years full-time education
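Given the AWS CDK item above, here is a minimal, hedged CDK v2 (Python) sketch of the Lambda-plus-S3 surface this role touches. The stack name, bucket, handler path, and runtime version are hypothetical, not part of the posting.

```python
# Minimal AWS CDK (v2, Python) sketch: an S3 landing bucket and a Python
# Lambda that would trigger downstream Glue/EMR work. Names are hypothetical.
from aws_cdk import App, Stack, Duration
from aws_cdk import aws_lambda as _lambda, aws_s3 as s3
from constructs import Construct

class DataAppStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Landing bucket for raw files.
        bucket = s3.Bucket(self, "LandingBucket")

        # Python Lambda that kicks off downstream processing.
        handler = _lambda.Function(
            self, "IngestHandler",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="index.handler",
            code=_lambda.Code.from_asset("lambda"),  # local ./lambda directory
            timeout=Duration.minutes(5),
        )
        # Least-privilege grant instead of a hand-written IAM policy.
        bucket.grant_read(handler)

app = App()
DataAppStack(app, "DataAppStack")
app.synth()
```

`cdk deploy` would synthesize this into CloudFormation; the same resources could equally be expressed in Terraform, the posting's secondary skill.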

Posted 2 weeks ago

Apply

15.0 - 20.0 years

10 - 14 Lacs

Hyderabad

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: Business Requirements Analysis, EPIC Systems
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that project goals are met, facilitating discussions to gather requirements, and overseeing the development process to ensure alignment with business objectives. You will also engage in problem-solving activities, providing guidance and support to your team members while ensuring that the applications meet the highest standards of quality and functionality. Your role will be pivotal in driving the success of projects and fostering a collaborative environment.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate training sessions to enhance team skills and knowledge.
- Monitor project progress and implement necessary adjustments to meet deadlines.

Professional & Technical Skills:
- Must-have skills: proficiency in Business Requirements Analysis and EPIC Systems.
- Strong analytical skills to assess business needs and translate them into technical requirements.
- Experience in stakeholder management and effective communication.
- Ability to lead cross-functional teams and drive project success.
- Familiarity with application design and development methodologies.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Business Requirements Analysis.
- This position is based at our Hyderabad office.
- A 15 years full-time education is required.

Qualification: 15 years full-time education

Posted 2 weeks ago

Apply

3.0 - 8.0 years

15 - 30 Lacs

Pune

Remote

3 to 15 years of experience only; evening & night shift.
Key skills (all must-have): MS4, Sunrise, Altera, TouchWorks, Allscripts, Sunrise Clinical Manager.
Notice period: 0-60 days.
CTC: Open.
Provide day-to-day application support for Sunrise Clinical Manager (SCM) modules.

Required Candidate profile: Provide day-to-day application support for Sunrise Clinical Manager modules (e.g., documentation, orders, medication administration).

Posted 2 weeks ago

Apply

5.0 - 10.0 years

10 - 15 Lacs

Bengaluru

Work from Office

Overall Responsibilities:
- Data Pipeline Development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy (see the sketch below).
- Data Ingestion: Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) to the data lake or data warehouse on CDP.
- Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements.
- Performance Optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing the runtime of ETL processes.
- Data Quality and Validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline.
- Automation and Orchestration: Automate data workflows using tools like Apache Oozie, Airflow, or similar orchestration tools within the Cloudera ecosystem.
- Monitoring and Maintenance: Monitor pipeline performance, troubleshoot issues, and perform routine maintenance on the Cloudera Data Platform and associated data processes.
- Collaboration: Work closely with other data engineers, analysts, product managers, and other stakeholders to understand data requirements and support various data-driven initiatives.
- Documentation: Maintain thorough documentation of data engineering processes, code, and pipeline configurations.

Category-wise Technical Skills:
- PySpark: Advanced proficiency, including working with RDDs, DataFrames, and optimization techniques.
- Cloudera Data Platform: Strong experience with CDP components, including Cloudera Manager, Hive, Impala, HDFS, and HBase.
- Data Warehousing: Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala).
- Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools.
- Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks.
- Scripting and Automation: Strong scripting skills in Linux.

Experience:
- 5-12 years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform.
- Proven track record of implementing data engineering best practices.
- Experience in data ingestion, transformation, and optimization on the Cloudera Data Platform.

Day-to-Day Activities:
- Design, develop, and maintain ETL pipelines using PySpark on CDP.
- Implement and manage data ingestion processes from various sources.
- Process, cleanse, and transform large datasets using PySpark.
- Conduct performance tuning and optimization of ETL processes.
- Implement data quality checks and validation routines.
- Automate data workflows using orchestration tools.
- Monitor pipeline performance and troubleshoot issues.
- Collaborate with team members to understand data requirements.
- Maintain documentation of data engineering processes and configurations.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
- Relevant certifications in PySpark and Cloudera technologies are a plus.

Soft Skills:
- Strong analytical and problem-solving skills.
- Excellent verbal and written communication abilities.
- Ability to work independently and collaboratively in a team environment.
- Attention to detail and commitment to data quality.

SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative "Same Difference" is committed to fostering an inclusive culture promoting equality, diversity, and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
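As a hedged illustration of the ingest, cleanse, and load pattern this posting describes (not Synechron's actual pipelines), here is a minimal PySpark sketch targeting a Hive table on CDP. The JDBC connection details, column names, and table names are hypothetical.

```python
# Hedged PySpark sketch: ingest -> cleanse -> load into a Hive table on CDP.
# Connection details, columns, and table names are made up.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder.appName("cdp-etl")
    .enableHiveSupport()  # Hive metastore access on the cluster
    .getOrCreate()
)

# Ingest from a relational source via JDBC.
txns = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://src-db:5432/sales")
    .option("dbtable", "public.transactions")
    .option("user", "etl").option("password", "***")
    .load()
)

# Cleanse and transform: drop bad rows, standardise types, add an audit column.
clean = (
    txns.filter(F.col("txn_id").isNotNull())
        .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
        .withColumn("load_ts", F.current_timestamp())
)

# Data-quality gate: fail fast on an empty batch.
if clean.count() == 0:
    raise ValueError("No valid rows in this batch")

# Load into a partitioned Hive table, queryable from Impala.
clean.write.mode("append").partitionBy("txn_date").saveAsTable("dw.transactions")
```

In practice a job like this would be scheduled by Oozie or Airflow, with the DQ gate feeding the monitoring described above.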

Posted 2 weeks ago

Apply

3.0 - 5.0 years

30 - 32 Lacs

India, Bengaluru

Work from Office

Job Title: Data Engineer (DE) / SDE - Data
Location: Bangalore
Experience range: 3-15 years

What we offer
Our mission is simple: building trust. Our customers' trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That's why we at Kotak Group are dedicated to transforming banking by imbibing a technology-first approach in everything we do, with an aim to enhance customer experience by providing superior banking services. We welcome and invite the best technological minds in the country to come join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many.

About our team
DEX is the central data org for Kotak Bank, managing the entire data experience of the bank. DEX stands for Kotak's Data Exchange. The org comprises the Data Platform, Data Engineering, and Data Governance charters and sits closely with the Analytics org. DEX is primarily working on a greenfield project to revamp the entire data platform, moving from on-premise solutions to a scalable AWS cloud-based platform. The team is being built from the ground up, which gives technologists great opportunities to build things from scratch and create a best-in-class data lakehouse solution. The primary skills this team should encompass are software development (preferably Python) for platform building on AWS; data engineering with Spark (PySpark, Spark SQL, Scala) for ETL development; and advanced SQL and data modelling for analytics. The org is expected to grow to a 100+ member team, primarily based out of Bangalore, comprising ~10 sub-teams independently driving their charters.

As a member of this team, you get the opportunity to learn the fintech space, one of the most sought-after domains today; be an early member in Kotak's digital transformation journey; learn and leverage technology to build complex data platform solutions, including real-time, micro-batch, batch, and analytics solutions, in a programmatic way; and be futuristic in building systems that can be operated by machines using AI technologies.

The data platform org is divided into 3 key verticals:

Data Platform
This vertical is responsible for building the data platform, which includes optimized storage for the entire bank and a centralized data lake; managed compute and orchestration frameworks, including serverless data solutions; a central data warehouse for extremely high-concurrency use cases; connectors for different sources; a customer feature repository; cost-optimization solutions like EMR optimizers; automations; and observability capabilities for Kotak's data platform. The team will also be the center of Data Engineering excellence, driving trainings and knowledge-sharing sessions with the large data-consumer base within Kotak.

Data Engineering
This team will own data pipelines for thousands of datasets, be skilled at sourcing data from 100+ source systems, and enable data consumption for 30+ data analytics products. The team will learn and build data models in a config-based, programmatic way and think big to build one of the most leveraged data models for financial orgs. This team will also enable centralized reporting for Kotak Bank, which cuts across multiple products and dimensions. Additionally, the data built by this team will be consumed by 20K+ branch consumers, RMs, Branch Managers, and all analytics use cases.

Data Governance
The team will be the central data governance team for Kotak Bank, managing metadata platforms and the Data Privacy, Data Security, Data Stewardship, and Data Quality platforms. If you have the right data skills and are ready to build data lake solutions from scratch for high-concurrency systems involving multiple source systems, then this is the team for you.

Your day-to-day role will include:
- Drive business decisions with technical input and lead the team.
- Design, implement, and support a data infrastructure from scratch.
- Manage AWS resources, including EC2, EMR, S3, Glue, Redshift, and MWAA (see the Airflow sketch below).
- Extract, transform, and load data from various sources using SQL and AWS big data technologies.
- Explore and learn the latest AWS technologies to enhance capabilities and efficiency.
- Collaborate with data scientists and BI engineers to adopt best practices in reporting and analysis.
- Improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.
- Build data platforms, data pipelines, or data management and governance tools.

BASIC QUALIFICATIONS for Data Engineer/SDE in Data:
- Bachelor's degree in Computer Science, Engineering, or a related field
- 3-5 years of experience in data engineering
- Strong understanding of AWS technologies, including S3, Redshift, Glue, and EMR
- Experience with data pipeline tools such as Airflow and Spark
- Experience with data modeling and data quality best practices
- Excellent problem-solving and analytical skills
- Strong communication and teamwork skills
- Experience in at least one modern scripting or programming language, such as Python, Java, or Scala
- Strong advanced SQL skills

PREFERRED QUALIFICATIONS:
- AWS cloud technologies: Redshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, IAM, Airflow
- Prior experience in the Indian banking segment and/or fintech is desired.
- Experience with non-relational databases and data stores
- Building and operating highly available, distributed data processing systems for large datasets
- Professional software engineering and best practices for the full software development life cycle
- Designing, developing, and implementing different types of data warehousing layers
- Leading the design, implementation, and successful delivery of large-scale, critical, or complex data solutions
- Building scalable data infrastructure and understanding distributed systems concepts
- SQL, ETL, and data modelling
- Ensuring the accuracy and availability of data to customers
- Proficiency in at least one scripting or programming language for handling large-volume data processing
- Strong presentation and communication skills

For Managers:
- Customer centricity and obsession for the customer
- Ability to manage stakeholders (product owners, business stakeholders, cross-functional teams) and coach agile ways of working
- Ability to structure and organize teams and streamline communication
- Prior work experience executing large-scale Data Engineering projects
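Since the role names MWAA (managed Airflow) for orchestration, here is a minimal, hedged sketch of the kind of daily ELT DAG such a team would run. It assumes Airflow 2.4+ on MWAA; the DAG id, task names, and callables are hypothetical.

```python
# Hedged sketch of a daily ELT DAG on MWAA; assumes Airflow 2.4+.
# DAG id, task names, and callable bodies are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull from source systems")        # e.g., JDBC or S3 landing zone

def transform():
    print("run Spark/EMR transformation")    # e.g., submit an EMR step

def load():
    print("publish to Redshift / data lake") # e.g., COPY into Redshift

with DAG(
    dag_id="daily_accounts_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```

The `>>` operator declares task dependencies, which is what lets Airflow retry, backfill, and monitor each stage independently.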

Posted 2 weeks ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Hyderabad, Ahmedabad

Work from Office

About the Role:
Grade Level (for internal use): 10

The Team: We seek a highly motivated, enthusiastic, and skilled engineer for our Industry Data Solutions Team. We strive to deliver sector-specific, data-rich, and hyper-targeted solutions for evolving business needs. You will be expected to participate in the design review process, write high-quality code, and work with a dedicated team of QA Analysts and Infrastructure Teams.

The Impact: Enterprise Data Organization is seeking a Software Developer for the design, development, and maintenance of data processing applications. This person would be part of a development team that manages and supports the internal and external applications supporting the business portfolio. The role involves handling data processing and big data application development. Our teams are made up of people who learn to work effectively together while collaborating with the larger group of developers on our platform.

What's in it for you:
- Opportunity to contribute to the development of a world-class Platform Engineering team.
- Engage in a highly technical, hands-on role designed to elevate team capabilities and foster continuous skill enhancement.
- Be part of a fast-paced, agile environment that processes massive volumes of data, ideal for advancing your software development and data engineering expertise while working with a modern tech stack.
- Contribute to the development and support of Tier-1, business-critical applications that are central to operations.
- Gain exposure to and work with cutting-edge technologies, including AWS Cloud and Databricks.
- Grow your career within a globally distributed team, with clear opportunities for advancement and skill development.

Responsibilities:
- Design and develop applications, components, and common services based on development models, languages, and tools, including unit testing, performance testing, monitoring, and implementation
- Support business and technology teams as necessary during design, development, and delivery to ensure scalable and robust solutions
- Build data-intensive applications and services to support and enhance fundamental financials in appropriate technologies (C#, .NET Core, Databricks, Python, Scala, NiFi, SQL)
- Build data models, achieve performance tuning, and apply data architecture concepts
- Develop applications adhering to secure coding practices and industry-standard coding guidelines, ensuring compliance with security best practices (e.g., OWASP) and internal governance policies
- Implement and maintain CI/CD pipelines to streamline build, test, and deployment processes; develop comprehensive unit test cases and ensure code quality
- Provide operations support to resolve issues proactively and with utmost urgency
- Effectively manage time and multiple tasks
- Communicate effectively, especially in writing, with the business and other technical groups

Basic Qualifications:
- Bachelor's/Master's degree in Computer Science, Information Systems, or equivalent
- Minimum 5 to 8 years of strong hands-on development experience in C#, .NET Core, cloud-native, and MS SQL Server backend development
- Proficiency with object-oriented programming
- Nice to have: knowledge of Grafana, Kibana, big data, GitHub, EMR, Terraform, AI/ML
- Advanced SQL programming skills
- Highly recommended: skills in Databricks and Scala technologies
- Understanding of database performance tuning for large datasets
- Ability to manage multiple priorities efficiently and effectively within specific timeframes
- Excellent logical, analytical, and communication skills, with strong verbal and writing proficiencies
- Knowledge of fundamentals data, or the financial industry, highly preferred
- Experience in conducting application design and code reviews
- Proficiency with the following technologies: object-oriented programming; programming languages (C#, .NET Core); cloud computing; database systems (SQL, MS SQL); nice to have: NoSQL (Databricks, Scala, Python) and scripting (Bash, Scala, Perl, PowerShell)

Preferred Qualifications:
- Hands-on experience with cloud computing platforms including AWS, Azure, or Google Cloud Platform (GCP)
- Proficient in working with Snowflake and Databricks for cloud-based data analytics and processing

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments, and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People, Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global.
- Health & Wellness: Health care coverage designed for the mind and body.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country, visit https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, pre-employment training, or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.

Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The "EEO is the Law" poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

Posted 2 weeks ago

Apply

15.0 - 18.0 years

30 - 40 Lacs

Hyderabad, Pune, Chennai

Hybrid

Experience: 15 to 18 years total, with at least 3 to 4 years of expertise in ETL, data engineering, and cloud technologies, and a proven ability to orchestrate cutting-edge technology to connect various applications within the cloud environment on a large development project.

Primary Technical Skills:
- ETL, Apache Spark, AWS EMR, EKS, serverless, data engineering, distributed computing, data lineage, Apache Airflow
- Java 17+, Spring Boot/Quarkus, Hibernate ORM, REST, Postgres or any RDBMS, microservices, cloud-native development
- Experience in ETL implementation using cloud technologies, distributed computing, and big data processing
- Orchestrate the integration of cloud-native principles, Kubernetes, MicroProfile specs, and the Spark framework
- Hands-on Java lead: strong in OOP concepts, Java design patterns, reactive programming, writing high-level solutions, and clean architecture
- Very strong advocate of coding best practices (SOLID, DRY, clean code, exception handling, TDD, unit testing, integration testing)
- Has implemented common frameworks for an application/platform (e.g., exception library, security authentication/authorization, auditing, idempotency, connectors)
- Experience implementing HLD and microservices architecture, with design patterns like resiliency, service orchestration, DB-per-service, CQRS, etc.

Interested candidates, share your CV at himani.girnar@alikethoughts.com with the details below:
Candidate's name-
Email and alternate email ID-
Contact and alternate contact no-
Total exp-
Relevant experience-
Current org-
Notice period-
CCTC-
ECTC-
Current location-
Preferred location-
Pancard no-

Posted 2 weeks ago

Apply

10.0 - 20.0 years

15 - 30 Lacs

Noida

Remote

Remote | Night Shift. Looking for a skilled Project Manager with strong US Healthcare experience. Must be proficient in full-cycle project management and comfortable working night shifts. Send your resume to simran.agrawal1@r4solutionsinc.com ASAP.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Bengaluru

Work from Office

Hiring for a FAANG company. Note: This position is open only to women professionals returning to the workforce after a career break (9+ months career gap, e.g., last working day prior to NOV 2024). We encourage you to apply only if you fit this criterion.

Position Overview
This is a Level 5 Data Engineer role within a leading e-commerce organization's Selling Partner Services division in India. The position focuses on building and scaling API authorization and customization systems that serve thousands of global selling partners. This is a senior-level position requiring significant technical expertise and leadership capabilities.

Team Context & Mission
- Organization: Selling Partner Services division
- Focus: API authorization and customization systems for global selling partners
- Mission: Create flexible, reliable, and extensible API solutions to help businesses thrive on the platform
- Culture: Startup excitement with enterprise-level resources and scale
- Impact: Direct influence on thousands of global selling partners

Key Responsibilities

Technical Leadership
- Lead design and implementation of complex data pipelines and ETL processes
- Architect scalable, high-performance data systems using cloud technologies and big data platforms
- Evaluate and recommend new technologies and tools for data infrastructure enhancement
- Troubleshoot and resolve complex data-related issues in production environments

Collaboration & Stakeholder Management
- Work closely with data scientists, analysts, and business stakeholders
- Understand data requirements and implement appropriate solutions
- Contribute to the development of data governance policies and procedures

Performance & Quality Optimization
- Optimize data storage and retrieval systems for performance and cost-effectiveness
- Implement data quality checks and monitoring systems (see the sketch after this posting)
- Ensure data integrity and reliability across all systems

Mentorship & Leadership
- Mentor junior engineers on the team
- Provide technical leadership on data engineering best practices and methodologies
- Drive adoption of industry standards and innovative approaches

Required Qualifications (Must-Have)
- 5+ years of data engineering experience; senior-level expertise expected
- 5+ years of SQL experience; advanced SQL skills for complex data manipulation
- Data modeling, warehousing, and ETL pipeline building as core competencies
- Distributed systems knowledge: understanding of data storage and computing in distributed environments
- Advanced proficiency in designing and implementing data solutions
- Strong understanding of data architecture principles
- Experience with production-level data systems
- Knowledge of data governance and quality assurance practices

Preferred Qualifications: Cloud Technology Stack
- Data Warehousing: Redshift, Snowflake, BigQuery
- Object Storage: S3, Azure Blob, Google Cloud Storage
- ETL Services: AWS Glue, Azure Data Factory, Google Dataflow
- Big Data Processing: EMR, Databricks, Apache Spark
- Real-time Streaming: Kinesis, Kafka, Apache Storm
- Data Delivery: Firehose, Apache NiFi
- Serverless Computing: Lambda, Azure Functions, Google Cloud Functions
- Identity Management: IAM, Active Directory, role-based access control

Non-Relational Database Experience
- Object Storage: S3, blob storage systems
- Document Stores: MongoDB, CouchDB
- Key-Value Stores: Redis, DynamoDB
- Graph Databases: Neo4j, ArangoDB
- Column-Family: Cassandra, HBase

Key Success Factors
- Scalability Focus: Building systems that can handle massive enterprise scale
- Performance Optimization: Continuous improvement of system efficiency
- Quality Assurance: Maintaining high data quality and reliability standards
- Innovation: Staying current with emerging technologies and best practices
- Collaboration: Effective partnership with stakeholders across the organization

This role represents a significant opportunity for a senior data engineer to make a substantial impact on a global e-commerce seller ecosystem while working with cutting-edge technologies and leading a team of talented professionals.
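As a hedged illustration of the "data quality checks and monitoring" responsibility above (not the employer's actual tooling), here is a minimal PySpark sketch; the paths, column names, and thresholds are hypothetical.

```python
# Minimal sketch of batch data-quality checks that gate a pipeline run.
# Paths, columns, and thresholds are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("s3://example-bucket/curated/partner_api_calls/")

total = df.count()
null_keys = df.filter(F.col("partner_id").isNull()).count()
dupes = total - df.dropDuplicates(["request_id"]).count()

# Fail the run loudly if quality thresholds are breached, so bad data
# never propagates downstream.
assert total > 0, "Empty batch"
assert null_keys / total < 0.01, f"Too many null partner_ids: {null_keys}"
assert dupes == 0, f"Duplicate request_ids found: {dupes}"
```

In a production system these assertions would typically publish metrics to a monitoring service rather than merely raising, but the gating logic is the same.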

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Navi Mumbai, Maharashtra

On-site

Seekify Global is looking for an experienced and motivated Data Catalog Engineer to join the Data Engineering team. The ideal candidate should have a significant background in designing and implementing metadata and data catalog solutions within AWS-centric data lake and data warehouse environments. As a Data Catalog Engineer at Seekify Global, you will play a crucial role in improving data discoverability, governance, and lineage across our enterprise data assets.

**Key Responsibilities:**
- Lead the end-to-end implementation of a data cataloging solution within AWS, preferably AWS Glue Data Catalog or third-party tools like Apache Atlas, Alation, Collibra, etc.
- Establish and manage metadata frameworks for structured and unstructured data assets in data lake and data warehouse environments.
- Integrate the data catalog with AWS-based storage solutions such as S3, Redshift, Athena, Glue, and EMR.
- Collaborate with data Governance/BPRG/IT projects teams to define metadata standards, data classifications, and stewardship processes.
- Develop automation scripts for catalog ingestion, lineage tracking, and metadata updates using Python, Lambda, PySpark, or Glue/EMR custom jobs (see the sketch after this posting).
- Work closely with data engineers, data architects, and analysts to ensure metadata is accurate, relevant, and up to date.
- Implement role-based access controls and ensure compliance with data privacy and regulatory standards.
- Create detailed documentation and conduct training/workshops for internal stakeholders on effectively utilizing the data catalog.

**Required Skills and Qualifications:**
- 7-8 years of experience in data engineering or metadata management roles.
- Proven expertise in implementing and managing data catalog solutions within AWS environments.
- Strong knowledge of AWS Glue, S3, Athena, Redshift, EMR, Data Catalog, and Lake Formation.
- Hands-on experience with metadata ingestion, data lineage, and classification processes.
- Proficiency in Python, SQL, and automation scripting for metadata pipelines.
- Familiarity with data governance and compliance standards (e.g., GDPR, RBI guidelines).
- Experience integrating with BI tools (e.g., Tableau, Power BI) and third-party catalog tools is a plus.
- Strong communication, problem-solving, and stakeholder management skills.

**Preferred Qualifications:**
- AWS certifications (e.g., AWS Certified Data Analytics, AWS Solutions Architect).
- Hands-on experience with data catalog tools like Alation, Collibra, or Informatica EDC, or with open-source alternatives.
- Exposure to data quality frameworks and stewardship practices.
- Knowledge of data migration with data catalogs and data marts is a plus.

This is a full-time position, and the work location is in person.
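As a hedged sketch of the metadata-automation work described above (not Seekify's actual scripts), here is a minimal boto3 helper that scans Glue Data Catalog tables and flags ones missing descriptions, a common stewardship check. The database name is a hypothetical placeholder.

```python
# Hedged sketch: list Glue Data Catalog tables and flag undocumented ones
# for stewardship follow-up. The database name is hypothetical.
import boto3

glue = boto3.client("glue")

paginator = glue.get_paginator("get_tables")
undocumented = []
for page in paginator.paginate(DatabaseName="analytics_lake"):
    for table in page["TableList"]:
        # Tables without a description are candidates for metadata curation.
        if not table.get("Description"):
            undocumented.append(table["Name"])

print(f"{len(undocumented)} tables missing descriptions:", undocumented)
```

A job like this could run on a schedule (Lambda or a Glue Python shell job) and feed its findings into the data-quality and stewardship processes the posting mentions.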

Posted 2 weeks ago

Apply

10.0 - 14.0 years

0 Lacs

karnataka

On-site

As a Senior Lead Engineer specializing in Python and Spark in AWS, you will be responsible for designing, building, and maintaining robust, scalable, and efficient ETL pipelines. Your primary focus will be on ensuring alignment with the data lakehouse architecture on AWS and optimizing workflows using services such as Glue, Lambda, and S3. Collaborating with cross-functional teams, you will gather requirements, provide technical insights, and deliver high-quality data solutions. Your role will involve driving the migration of existing data processing workflows to the lakehouse architecture, leveraging Iceberg capabilities, and enforcing best practices for coding standards and system architecture.

You will play a key role in implementing data quality and governance frameworks to ensure reliable and consistent data processing across the platform. Monitoring and improving system performance, optimizing data workflows, and ensuring all solutions are secure, compliant, and meet industry standards will be crucial aspects of your responsibilities. Leading technical discussions, mentoring team members, and fostering a culture of continuous learning and innovation are essential for this role. You will also maintain relationships with senior management, architectural groups, development managers, team leads, data engineers, analysts, and agile team members.

Key Skills and Experience:
- Extensive expertise in Python and Spark for designing and implementing complex data processing workflows.
- Strong experience with AWS services such as Glue, Lambda, S3, and EMR, focusing on data lakehouse solutions.
- Deep understanding of data quality frameworks, data contracts, and governance processes.
- Ability to design and implement scalable, maintainable, and secure architectures using modern data technologies.
- Hands-on experience with Apache Iceberg and its integration within data lakehouse environments.
- Expertise in problem-solving, performance optimization, and Agile methodologies.
- Excellent interpersonal skills with the ability to communicate complex technical solutions effectively.

Desired Skills and Experience:
- Familiarity with additional programming languages such as Java.
- Experience with serverless computing paradigms.
- Knowledge of data visualization or reporting tools for stakeholder communication.
- Certification in AWS or data engineering (e.g., AWS Certified Data Analytics, Certified Spark Developer).

Education and Certifications:
- A bachelor's degree in Computer Science, Software Engineering, or a related field is helpful.
- Equivalent professional experience or certifications will also be considered.

Join us at LSEG, a leading global financial markets infrastructure and data provider, where you will be part of a dynamic organization across 65 countries. We value individuality, encourage new ideas, and are committed to sustainability, driving sustainable economic growth and inclusivity. Experience the critical role we play in re-engineering the financial ecosystem and creating economic opportunities while accelerating the transition to net zero. At LSEG, we offer tailored benefits including healthcare, retirement planning, paid volunteering days, and wellbeing initiatives.

Posted 2 weeks ago

Apply

5.0 - 8.0 years

11 - 21 Lacs

Hyderabad

Hybrid

- AWS (EMR, S3, Glue, Airflow, RDS, DynamoDB, or similar)
- CI/CD (Jenkins or another)
- Relational databases experience (any)
- NoSQL databases experience (any)
- Microservices, domain services, API gateways, or similar
- Containers (Docker, K8s, or similar)

Required Candidate profile: Immediate joiners preferred.

Posted 2 weeks ago

Apply

8.0 - 13.0 years

12 - 20 Lacs

Hyderabad

Work from Office

The Data Centre Senior Manager is responsible for overseeing the operations, maintenance, and strategic planning of the hospital's data center.
- 8+ years of experience in IT/data center management, with at least 3 years in a leadership role.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

6 - 11 Lacs

Noida

Work from Office

5+ years of experience in data engineering with a strong focus on AWS services. Proven expertise in:
- Amazon S3 for scalable data storage
- AWS Glue for ETL and serverless data integration using Amazon S3, DataSync, and EMR
- Redshift for data warehousing and analytics
- Proficiency in SQL, Python, or PySpark for data processing
- Experience with data modeling, partitioning strategies, and performance optimization (see the sketch below)
- Familiarity with orchestration tools like AWS Step Functions, Apache Airflow, or Glue Workflows
- Strong understanding of data lake and data warehouse architectures
- Excellent problem-solving and communication skills

Mandatory Competencies:
- Beh - Communication
- ETL - ETL - AWS Glue
- Big Data - Big Data - PySpark
- Cloud - AWS - AWS Lambda, AWS EventBridge, AWS Fargate
- Cloud - AWS - AWS S3, S3 Glacier, AWS EBS
- Cloud - AWS - TensorFlow on AWS, AWS Glue, AWS EMR, Amazon Data Pipeline, AWS Redshift
- Programming Language - Python - Python Shell
- Database - Database Programming - SQL
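As a hedged illustration of the partitioning strategy mentioned above (paths are hypothetical), here is a minimal PySpark sketch that writes curated data to S3 partitioned by date, the layout that lets Athena and Redshift Spectrum prune scans to the partitions a query actually needs.

```python
# Minimal sketch: date-partitioned Parquet output on S3 for efficient
# downstream querying. Bucket paths are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("partitioned-write").getOrCreate()

events = spark.read.json("s3://example-raw/events/")

(events
    .withColumn("event_date", F.to_date("event_ts"))
    .repartition("event_date")        # co-locate rows for each partition
    .write.mode("overwrite")
    .partitionBy("event_date")        # -> s3://.../event_date=2024-01-01/
    .parquet("s3://example-curated/events/"))
```

Queries filtered on `event_date` then read only the matching S3 prefixes, which is the performance-optimization effect the requirement is after.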

Posted 2 weeks ago

Apply

8.0 - 12.0 years

10 - 15 Lacs

Pune

Work from Office

Key Responsibilities
- Design and develop scalable applications using Python and AWS services
- Debug and resolve production issues across complex distributed systems
- Architect solutions aligned with business strategies and industry standards
- Lead and mentor a team of India-based developers; guide career development
- Ensure technical deliverables meet the highest standards of quality and performance
- Research and integrate emerging technologies and processes into the development strategy
- Document solutions in compliance with SDLC standards using defined templates
- Assemble large, complex datasets based on functional and non-functional requirements
- Handle operational issues and recommend improvements to the technology stack
- Facilitate end-to-end platform integration across enterprise-level applications (see the sketch after this posting)

Required Skills
- Technical: Python, data engineering, debugging and troubleshooting
- Cloud & Architecture: AWS (EC2, EKS, Glue, Lambda, S3, EMR, RDS, API Gateway), Step Functions, CloudFront, EventBridge, ARRFlow, Airflow (MWAA), Quicksight, system integration
- Tools & Processes: Terraform, CI/CD pipelines, SDLC, documentation templates

Qualifications
- 10+ years of software development experience, preferably in financial/trading applications
- 5+ years of people management and mentoring experience
- Proven track record in technical leadership and architecture planning
- Expertise in developing applications using Python and the AWS stack
- Strong grasp of Terraform and automated CI/CD processes
- Exceptional multitasking and prioritization capabilities
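Since the role pairs Python with Step Functions for platform integration, here is a hedged sketch of kicking off a state machine from Python; the ARN, account id, and input payload are hypothetical placeholders.

```python
# Hedged sketch: start a Step Functions execution from Python.
# The state machine ARN and input payload are hypothetical.
import json
import boto3

sfn = boto3.client("stepfunctions")

execution = sfn.start_execution(
    stateMachineArn=(
        "arn:aws:states:us-east-1:123456789012:"
        "stateMachine:nightly-batch"
    ),
    input=json.dumps({"run_date": "2024-01-01"}),
)
print("Started:", execution["executionArn"])
```

In an event-driven design, the same state machine would more often be triggered by an EventBridge rule; the boto3 call is useful for backfills and ad hoc reruns.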

Posted 2 weeks ago

Apply

8.0 - 13.0 years

30 - 35 Lacs

Bengaluru

Work from Office

About The Role: Data Engineer - 1 (Experience: 0-2 years)

What we offer
Our mission is simple: building trust. Our customers' trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That's why we at Kotak Group are dedicated to transforming banking by imbibing a technology-first approach in everything we do, with an aim to enhance customer experience by providing superior banking services. We welcome and invite the best technological minds in the country to come join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many.

About our team
DEX is the central data org for Kotak Bank, managing the entire data experience of the bank. DEX stands for Kotak's Data Exchange. The org comprises the Data Platform, Data Engineering, and Data Governance charters and sits closely with the Analytics org. DEX is primarily working on a greenfield project to revamp the entire data platform, moving from on-premise solutions to a scalable AWS cloud-based platform. The team is being built from the ground up, which gives technologists great opportunities to build things from scratch and create a best-in-class data lakehouse solution. The primary skills this team should encompass are software development (preferably Python) for platform building on AWS; data engineering with Spark (PySpark, Spark SQL, Scala) for ETL development; and advanced SQL and data modelling for analytics. The org is expected to grow to a 100+ member team, primarily based out of Bangalore, comprising ~10 sub-teams independently driving their charters.

As a member of this team, you get the opportunity to learn the fintech space, one of the most sought-after domains today; be an early member in Kotak's digital transformation journey; learn and leverage technology to build complex data platform solutions, including real-time, micro-batch, batch, and analytics solutions, in a programmatic way; and be futuristic in building systems that can be operated by machines using AI technologies.

The data platform org is divided into 3 key verticals:

Data Platform
This vertical is responsible for building the data platform, which includes optimized storage for the entire bank and a centralized data lake; managed compute and orchestration frameworks, including serverless data solutions; a central data warehouse for extremely high-concurrency use cases; connectors for different sources; a customer feature repository; cost-optimization solutions like EMR optimizers; automations; and observability capabilities for Kotak's data platform. The team will also be the center of Data Engineering excellence, driving trainings and knowledge-sharing sessions with the large data-consumer base within Kotak.

Data Engineering
This team will own data pipelines for thousands of datasets, be skilled at sourcing data from 100+ source systems, and enable data consumption for 30+ data analytics products. The team will learn and build data models in a config-based, programmatic way and think big to build one of the most leveraged data models for financial orgs. This team will also enable centralized reporting for Kotak Bank, which cuts across multiple products and dimensions. Additionally, the data built by this team will be consumed by 20K+ branch consumers, RMs, Branch Managers, and all analytics use cases.

Data Governance
The team will be the central data governance team for Kotak Bank, managing metadata platforms and the Data Privacy, Data Security, Data Stewardship, and Data Quality platforms. If you have the right data skills and are ready to build data lake solutions from scratch for high-concurrency systems involving multiple source systems, then this is the team for you.

Your day-to-day role will include:
- Drive business decisions with technical input and lead the team.
- Design, implement, and support a data infrastructure from scratch.
- Manage AWS resources, including EC2, EMR, S3, Glue, Redshift, and MWAA.
- Extract, transform, and load data from various sources using SQL and AWS big data technologies.
- Explore and learn the latest AWS technologies to enhance capabilities and efficiency.
- Collaborate with data scientists and BI engineers to adopt best practices in reporting and analysis.
- Improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.
- Build data platforms, data pipelines, or data management and governance tools.

BASIC QUALIFICATIONS for Data Engineer/SDE in Data:
- Bachelor's degree in Computer Science, Engineering, or a related field
- Experience in data engineering
- Strong understanding of AWS technologies, including S3, Redshift, Glue, and EMR
- Experience with data pipeline tools such as Airflow and Spark
- Experience with data modeling and data quality best practices
- Excellent problem-solving and analytical skills
- Strong communication and teamwork skills
- Experience in at least one modern scripting or programming language, such as Python, Java, or Scala
- Strong advanced SQL skills

PREFERRED QUALIFICATIONS:
- AWS cloud technologies: Redshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, IAM, Airflow
- Prior experience in the Indian banking segment and/or fintech is desired.
- Experience with non-relational databases and data stores
- Building and operating highly available, distributed data processing systems for large datasets
- Professional software engineering and best practices for the full software development life cycle
- Designing, developing, and implementing different types of data warehousing layers
- Leading the design, implementation, and successful delivery of large-scale, critical, or complex data solutions
- Building scalable data infrastructure and understanding distributed systems concepts
- SQL, ETL, and data modelling
- Ensuring the accuracy and availability of data to customers
- Proficiency in at least one scripting or programming language for handling large-volume data processing
- Strong presentation and communication skills

Posted 2 weeks ago

Apply

8.0 - 13.0 years

0 - 1 Lacs

Chennai

Hybrid

Duties and Responsibilities
- Lead the design and implementation of scalable, secure, and high-performance solutions for data-intensive applications.
- Collaborate with stakeholders, other product development groups, and software vendors to identify and define solutions for complex business and technical requirements.
- Develop and maintain cloud infrastructure using platforms such as AWS, Azure, or Google Cloud.
- Articulate technology solutions and explain the competitive advantages of various technology alternatives.
- Evangelize best practices to analytics teams.
- Ensure data security, privacy, and compliance with relevant regulations.
- Optimize cloud resources for cost-efficiency and performance.
- Lead the migration of on-premises data systems to the cloud.
- Implement data storage, processing, and analytics solutions using cloud-native services.
- Monitor and troubleshoot cloud infrastructure and data pipelines.
- Stay updated with the latest trends and best practices in cloud computing and data management.

Skills
- 5+ years of hands-on design and development experience implementing data analytics applications using AWS services such as S3, Glue, AWS Step Functions, Kinesis, Lambda, Lake Formation, Athena, Elastic Container Service/Elastic Kubernetes Service, Elasticsearch, and Amazon EMR or Snowflake
- Experience with AWS IoT services such as AWS IoT Greengrass, AWS IoT SiteWise, AWS IoT Core, and AWS IoT Events
- Strong understanding of cloud architecture principles and best practices
- Proficiency in designing network topology, endpoints, application registration, and network pairing
- Well versed in access management in Azure or other clouds
- Experience with containerization technologies like Docker and Kubernetes
- Expertise in CI/CD pipelines and version control systems like Git
- Excellent problem-solving skills and attention to detail
- Strong communication and leadership skills
- Ability to work collaboratively with cross-functional teams and stakeholders
- Knowledge of security and compliance standards related to cloud data platforms

Technical / Functional Skills
- 3+ years of experience implementing the Amazon Web Services listed above
- 3+ years of experience as an SAP BW Developer
- 3+ years of experience in Snowflake (or Redshift)
- 3+ years of experience as a Data Integration Developer in Fivetran/HVR/DBT and Boomi (or Talend/Informatica)
- 2+ years of experience with Azure OpenAI, Azure AI Services, Microsoft Copilot Studio, Power BI, and Power Automate
- Experience in the Networking and Security domain

Domain Expertise
- Experience with SDLC/Agile/Scrum/Kanban

Project Experience
- Hands-on experience in the end-to-end implementation of data analytics applications on AWS
- Hands-on experience in the end-to-end implementation of SAP BW applications for FICO, Sales & Distribution, and Materials Management
- Hands-on experience with Fivetran/HVR/Boomi in the development of data integration services with data from SAP, Salesforce, Workday, and other SaaS applications
- Hands-on experience in the implementation of Gen AI use cases using Azure services
- Hands-on experience in the implementation of advanced analytics use cases using Python/R

Certifications
- AWS Certified Solutions Architect - Professional

Posted 2 weeks ago

Apply

8.0 - 13.0 years

25 - 37 Lacs

Pune

Hybrid

Job Title: Data Engineer

Job Description
We are looking for a self-starter to join our Data Engineering team. You will work in a fast-paced environment where you will get an opportunity to build and contribute to the full lifecycle development and maintenance of the data engineering platform.

With the Data Engineering team you will get an opportunity to:
- Design and implement data engineering solutions that are scalable, reliable, and secure in the cloud environment
- Understand and translate business needs into data engineering solutions
- Build large-scale data pipelines that can handle big datasets using distributed data processing techniques, supporting the efforts of the data science and data application teams
- Partner with cross-functional stakeholders, including product managers, architects, data quality engineers, and application and quantitative science end users, to deliver engineering solutions
- Contribute to defining data governance across the data platform

Basic Requirements:
- A minimum of a BS degree in computer science, software engineering, or a related scientific discipline
- 5+ years of work experience building scalable and robust data engineering solutions
- Strong understanding of object-oriented programming and proficiency in Python (TDD) and PySpark to build scalable algorithms
- 5+ years of experience in distributed computing and big data processing using the Apache Spark framework, including Spark optimization techniques
- 5+ years of experience with Databricks, Delta tables, Unity Catalog, Delta Sharing, Delta Live Tables (DLT), and incremental data processing (see the sketch below)
- Experience with Delta Lake and Unity Catalog
- Advanced SQL coding and query optimization experience, including the ability to write analytical and nested queries
- 5+ years of experience building scalable ETL/ELT data pipelines on Databricks and AWS (EMR)
- 5+ years of experience orchestrating data pipelines using Apache Airflow/MWAA
- Understanding and experience of AWS services, including ADX, EC2, and S3
- 5+ years of experience with data modeling techniques for structured/unstructured datasets
- Experience with relational/columnar databases (Redshift, RDS) and interactive querying services (Athena/Redshift Spectrum)
- Passion for healthcare and improving patient outcomes
- Analytical thinking with strong problem-solving skills
- Staying on top of emerging technologies and a willingness to learn

Bonus Experience (optional):
- Experience with an Agile environment
- Experience operating in a CI/CD environment
- Experience building HTTP/REST APIs using popular frameworks
- Healthcare experience
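As a hedged illustration of the incremental processing the Delta Lake requirements above refer to (not this employer's actual pipelines), here is a minimal PySpark sketch that upserts a batch into a Delta table via MERGE; the table, path, and key column are hypothetical.

```python
# Hedged sketch: incremental upsert into a Delta table with MERGE.
# Table names, paths, and the key column are hypothetical placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-upsert").getOrCreate()

# New batch of records arriving from an upstream source.
updates = spark.read.parquet("s3://example/incoming/patients/")

# Target Delta table registered in the metastore (e.g., Unity Catalog).
target = DeltaTable.forName(spark, "clinical.patients")

(target.alias("t")
    .merge(updates.alias("u"), "t.patient_id = u.patient_id")
    .whenMatchedUpdateAll()      # update rows that already exist
    .whenNotMatchedInsertAll()   # insert genuinely new rows
    .execute())
```

MERGE gives idempotent incremental loads: reprocessing the same batch converges to the same table state, which is what makes retries and backfills safe.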

Posted 2 weeks ago

Apply

9.0 - 14.0 years

30 - 35 Lacs

Bengaluru

Work from Office

About our team
DEX is the central data org for Kotak Bank and manages the entire data experience of the bank. DEX stands for Kotak's Data Exchange. The org comprises the Data Platform, Data Engineering, and Data Governance charters and sits closely with the Analytics org. DEX is primarily working on a greenfield project to revamp the entire data platform, moving from on-premise solutions to a scalable AWS cloud-based platform. The team is being built from the ground up, which provides great opportunities for technologists to build things from scratch and create a best-in-class data lakehouse solution. The primary skills this team should encompass are software development skills (preferably Python) for platform building on AWS; data engineering with Spark (PySpark, Spark SQL, Scala) for ETL development; and advanced SQL and data modelling for analytics. The org size is expected to be around a 100+ member team, primarily based out of Bangalore, comprising ~10 sub-teams independently driving their charters. As a member of this team, you get the opportunity to learn the fintech space, be an early member in the digital transformation journey of Kotak, learn and leverage technology to build complex data platform solutions (including real-time, micro-batch, batch, and analytics solutions) in a programmatic way, and be futuristic enough to build systems that can be operated by machines using AI technologies.
The data platform org is divided into 3 key verticals:
Data Platform
This vertical is responsible for building the data platform, which includes optimized storage for the entire bank and a centralized data lake, a managed compute and orchestration framework (including serverless data solutions), a central data warehouse for extremely high-concurrency use cases, connectors for different sources, a customer feature repository, and cost-optimization solutions like EMR optimizers, as well as automation and observability capabilities for Kotak's data platform. The team will also be the center for data engineering excellence, driving trainings and knowledge-sharing sessions with the large data consumer base within Kotak.
Data Engineering
This team will own data pipelines for thousands of datasets, source data from 100+ source systems, and enable data consumption for 30+ data analytics products. The team will build data models in a config-based, programmatic way and think big to build one of the most leveraged data models for financial orgs. This team will also enable centralized reporting for Kotak Bank, cutting across multiple products and dimensions. Additionally, the data built by this team will be consumed by 20K+ branch consumers, RMs, branch managers, and all analytics use cases.
Data Governance
This will be the central data governance team for Kotak Bank, managing metadata platforms, data privacy, data security, data stewardship, and the data quality platform. If you have the right data skills and are ready to build data lake solutions from scratch for high-concurrency systems involving multiple systems, then this is the team for you.
Your day-to-day role will include:
Drive business decisions with technical input and lead the team. Design, implement, and support a data infrastructure from scratch. Manage AWS resources, including EC2, EMR, S3, Glue, Redshift, and MWAA. Extract, transform, and load data from various sources using SQL and AWS big data technologies. Explore and learn the latest AWS technologies to enhance capabilities and efficiency. Collaborate with data scientists and BI engineers to adopt best practices in reporting and analysis. Improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers. Build data platforms, data pipelines, or data management and governance tools.
BASIC QUALIFICATIONS for Data Engineer/SDE in Data
Bachelor's degree in Computer Science, Engineering, or a related field. Experience in data engineering. Strong understanding of AWS technologies, including S3, Redshift, Glue, and EMR. Experience with data pipeline tools such as Airflow and Spark. Experience with data modeling and data quality best practices. Excellent problem-solving and analytical skills. Strong communication and teamwork skills. Experience in at least one modern scripting or programming language, such as Python, Java, or Scala. Strong advanced SQL skills.
PREFERRED QUALIFICATIONS
AWS cloud technologies: Redshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, IAM, Airflow. Prior experience in the Indian banking segment and/or fintech is desired. Experience with non-relational databases and data stores. Building and operating highly available, distributed data processing systems for large datasets. Professional software engineering and best practices for the full software development life cycle. Designing, developing, and implementing different types of data warehousing layers. Leading the design, implementation, and successful delivery of large-scale, critical, or complex data solutions. Building scalable data infrastructure and understanding distributed systems concepts. SQL, ETL, and data modelling. Ensuring the accuracy and availability of data to customers. Proficiency in at least one scripting or programming language for handling large-volume data processing. Strong presentation and communication skills.
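As a sketch of the EMR-plus-MWAA orchestration mentioned above, the hypothetical Airflow DAG below submits a Spark step to an existing EMR cluster and waits for it to finish; the cluster ID, script path, and schedule are placeholders, not Kotak specifics.

```python
# Hypothetical MWAA/Airflow DAG; cluster ID and S3 paths are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.emr import EmrAddStepsOperator
from airflow.providers.amazon.aws.sensors.emr import EmrStepSensor

SPARK_STEP = [{
    "Name": "daily_etl",
    "ActionOnFailure": "CONTINUE",
    "HadoopJarStep": {
        "Jar": "command-runner.jar",
        "Args": ["spark-submit", "s3://example-bucket/jobs/daily_etl.py"],
    },
}]

with DAG(
    dag_id="emr_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    add_step = EmrAddStepsOperator(
        task_id="add_step",
        job_flow_id="j-EXAMPLECLUSTER",  # existing EMR cluster ID (placeholder)
        steps=SPARK_STEP,
    )
    wait_step = EmrStepSensor(
        task_id="wait_step",
        job_flow_id="j-EXAMPLECLUSTER",
        step_id="{{ task_instance.xcom_pull(task_ids='add_step')[0] }}",
    )
    add_step >> wait_step
```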

Posted 2 weeks ago

Apply

6.0 - 10.0 years

15 - 25 Lacs

Chennai

Work from Office

Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.
The Role
Are you ready to dive headfirst into the captivating world of data engineering at Kyndryl? As an AWS Data Engineer, you'll be the visionary behind our data platforms, crafting them into powerful tools for decision-makers. Your role? Ensuring a treasure trove of pristine, harmonized data is at everyone's fingertips. In this role, you'll be engineering the backbone of our data infrastructure, ensuring the availability of pristine, refined data sets. With a well-defined methodology, critical thinking, and a rich blend of domain expertise, consulting finesse, and software engineering prowess, you'll be the mastermind of data transformation.
Key Responsibilities:
1. Data Pipeline Design & Development: Design and develop scalable, resilient, and secure ETL/ELT data pipelines using AWS services. Build and optimize data workflows leveraging AWS Glue, EMR, Lambda, and Step Functions. Implement batch and real-time data ingestion using Kafka, Kinesis, or AWS Data Streams. Ensure efficient data movement across S3, Redshift, DynamoDB, RDS, and Snowflake.
2. Cloud Data Engineering & Storage: Architect and manage data lakes and data warehouses using Amazon S3, Redshift, and Athena. Optimize data storage and retrieval using Parquet, ORC, Avro, and columnar storage formats. Implement data partitioning, indexing, and query performance tuning. Work with NoSQL databases (DynamoDB, MongoDB) and relational databases (PostgreSQL, MySQL, Aurora).
3. Infrastructure as Code (IaC) & Automation: Deploy and manage AWS data infrastructure using Terraform, AWS CloudFormation, or AWS CDK. Implement CI/CD pipelines for automated data pipeline deployments using GitHub Actions, Jenkins, or AWS CodePipeline. Automate data workflows and job orchestration using Apache Airflow, AWS Step Functions, or MWAA.
4. Performance Optimization & Monitoring: Optimize Spark, Hive, and Presto queries for performance and cost efficiency. Implement auto-scaling strategies for AWS EMR clusters. Set up monitoring, logging, and alerting with AWS CloudWatch, CloudTrail, and Prometheus/Grafana.
5. Security, Compliance & Governance: Implement IAM policies, encryption (AWS KMS), and role-based access controls. Ensure compliance with GDPR, HIPAA, and industry data governance standards. Monitor data pipelines for security vulnerabilities and unauthorized access.
6. Collaboration & Stakeholder Engagement: Work closely with data analysts, data scientists, and business teams to understand data needs. Document data pipeline designs, architecture decisions, and best practices. Mentor and guide junior data engineers on AWS best practices and optimization techniques.
Your journey begins by understanding project objectives and requirements from a business perspective, converting this knowledge into a data puzzle. You'll be delving into the depths of information to uncover quality issues and initial insights, setting the stage for data excellence. But it doesn't stop there. You'll be the architect of data pipelines, using your expertise to cleanse, normalize, and transform raw data into the final dataset—a true data alchemist. So, if you're a technical enthusiast with a passion for data, we invite you to join us in the exhilarating world of data engineering at Kyndryl. Let's transform data into a compelling story of innovation and growth.
Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.
Who You Are
You're good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you're open and borderless – naturally inclusive in how you work with others.
Required Skills and Experience
7+ years of experience in data engineering with a focus on AWS cloud technologies. Expertise in AWS Glue, Lambda, EMR, Redshift, Kinesis, and Step Functions. Proficiency in SQL, Python, Java, and PySpark for data transformations. Strong understanding of ETL/ELT best practices and data warehousing concepts. Experience with Apache Airflow or Step Functions for orchestration. Familiarity with Kafka, Kinesis, or other streaming platforms. Knowledge of Terraform, CloudFormation, and DevOps for AWS. Expertise in data mining, data storage, and Extract-Transform-Load (ETL) processes. Experience in data pipeline development and tooling, such as Glue, Databricks, Synapse, or Dataproc. Experience with both relational and NoSQL databases, including PostgreSQL, DB2, and MongoDB. Excellent problem-solving, analytical, and critical thinking skills. Ability to manage multiple projects simultaneously while maintaining attention to detail. Communication skills: ability to communicate with both technical and non-technical colleagues to derive technical requirements from business needs and problems.
Preferred Skills and Experience
Experience working as a Data Engineer and/or in cloud modernization. Experience with AWS Lake Formation and Data Catalog for metadata management. Knowledge of Databricks, Snowflake, or BigQuery for data analytics.
Being You
Diversity is a whole lot more than what we look like or where we come from, it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.
What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.
Get Referred! If you know someone that works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.
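For the real-time ingestion responsibilities listed in this role, here is a minimal hypothetical Kinesis producer using boto3; the stream name, region, and event payload are assumptions for illustration only.

```python
# Hypothetical Kinesis producer; stream, region, and payload are placeholders.
import json

import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

def put_event(event: dict) -> None:
    """Send one JSON event; the partition key spreads load across shards."""
    kinesis.put_record(
        StreamName="example-clickstream",
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=str(event["user_id"]),
    )

put_event({"user_id": 42, "action": "page_view", "ts": "2024-01-01T00:00:00Z"})
```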

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

karnataka

On-site

As a Site Reliability Engineering (SRE) Technical Leader on the Network Assurance Data Platform (NADP) team at ThousandEyes, you will be responsible for ensuring the reliability, scalability, and security of cloud and big data platforms. Your role will involve representing the NADP SRE team, working in a dynamic environment, and providing technical leadership in defining and executing the team's technical roadmap. Collaborating with cross-functional teams, including software development, product management, customers, and security teams, is essential. Your contributions will directly impact the success of machine learning (ML) and AI initiatives by ensuring a robust and efficient platform infrastructure aligned with operational excellence.
In this role, you will design, build, and optimize cloud and data infrastructure to ensure high availability, reliability, and scalability of big-data and ML/AI systems. Collaboration with cross-functional teams will be crucial in creating secure, scalable solutions that support ML/AI workloads and enhance operational efficiency through automation. Troubleshooting complex technical problems, conducting root cause analyses, and contributing to continuous improvement efforts are key responsibilities. You will lead the architectural vision, shape the team's technical strategy and roadmap, and act as a mentor and technical leader to foster a culture of engineering and operational excellence. Engaging with customers and stakeholders to understand use cases and feedback, translating them into actionable insights, and effectively influencing stakeholders at all levels are essential aspects of the role. Utilizing strong programming skills to integrate software and systems engineering, building core data platform capabilities and automation to meet enterprise customer needs, is a crucial requirement. Developing strategic roadmaps, processes, plans, and infrastructure to efficiently deploy new software components at an enterprise scale while enforcing engineering best practices is also part of the role.
Qualifications for this position include 8-12 years of relevant experience and a bachelor's degree in computer science or its equivalent. Candidates should have the ability to design and implement scalable solutions with a focus on streamlining operations. Strong hands-on experience in cloud, preferably AWS, is required, along with infrastructure-as-code skills, ideally with Terraform and EKS or Kubernetes. Proficiency in observability tools like Prometheus, Grafana, Thanos, CloudWatch, OpenTelemetry, and the ELK stack is necessary. Writing high-quality code in Python, Go, or equivalent programming languages is essential, as is a good understanding of Unix/Linux systems, system libraries, file systems, and client-server protocols. Experience in building cloud, big data, and/or ML/AI infrastructure, architecting software and infrastructure at scale, and certifications in cloud and security domains are beneficial qualifications for this role.
Cisco emphasizes diversity and encourages candidates to apply even if they do not meet every single qualification. Diverse perspectives and skills are valued, and Cisco believes that diverse teams are better equipped to solve problems, innovate, and create a positive impact.
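To make the observability expectations concrete, below is a minimal sketch of instrumenting a Python service with prometheus_client so Prometheus can scrape it; the metric names and port are illustrative, not details from the posting.

```python
# Hypothetical instrumentation; metric names and port are placeholders.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("pipeline_requests_total", "Requests processed")
LATENCY = Histogram("pipeline_latency_seconds", "Request latency in seconds")

def handle_request() -> None:
    REQUESTS.inc()
    with LATENCY.time():  # records elapsed time into the histogram
        time.sleep(random.uniform(0.01, 0.1))  # simulated work

if __name__ == "__main__":
    start_http_server(8000)  # exposes /metrics for Prometheus to scrape
    while True:
        handle_request()
```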

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

karnataka

On-site

At Lilly, you are part of a global healthcare leader that is committed to uniting caring with discovery to enhance the lives of people worldwide. With our headquarters in Indianapolis, Indiana, our dedicated team of 39,000 employees collaborates to discover and deliver life-changing medicines, enhance disease understanding and management, and contribute to our communities through philanthropy and volunteerism. Our focus is on making a positive impact on people's lives around the world.
As part of our ongoing efforts, we are in the process of developing and internalizing a cutting-edge recommendation engine platform. This platform aims to streamline sales and marketing operations by analyzing diverse data sources, implementing advanced personalization models, and seamlessly integrating with other Lilly operations platforms. The goal is to provide tailored recommendations to our sales and marketing teams at the individual doctor level, enabling informed decision-making and enhancing customer experience.
Responsibilities:
- Utilize deep learning models to optimize omnichannel promotional sequences for sales teams
- Analyze large datasets to identify trends and relevant information for modeling decisions
- Translate business problems into statistical problem statements and propose solution approaches
- Collaborate with stakeholders to effectively communicate analysis findings
- Preference for familiarity with pharmaceutical datasets and industry
- Experience in code refactoring, model training, deployment, testing, and monitoring for drift
- Optimize model hyperparameters and adapt to new ML techniques for business problem-solving
Qualifications:
- Bachelor's degree in Computer Science, Statistics, or related field (preferred)
- 2-6 years of hands-on experience with data analysis, coding, and result interpretation
- Proficiency in coding languages like SQL or Python
- Prior experience with ML techniques for recommendation engine models in healthcare sectors
- Expertise in feature engineering, selection, and model validation on big data
- Familiarity with cloud technology, particularly AWS, and tools like Tableau and Power BI
At Lilly, we are committed to promoting workplace diversity and providing equal opportunities for all individuals, including those with disabilities. If you require accommodation during the application process, please complete the accommodation request form on our website. Join us at Lilly and be part of a team dedicated to making a difference in the lives of people worldwide.
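As a sketch of the model-validation work described in the responsibilities above, here is a minimal scikit-learn cross-validation example; the feature matrix and labels are synthetic placeholders, not Lilly data.

```python
# Hypothetical validation sketch; features and labels are synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))    # e.g., engineered features per doctor
y = rng.integers(0, 2, size=500)  # e.g., responded to a channel or not

model = GradientBoostingClassifier()
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"Mean ROC-AUC across folds: {scores.mean():.3f}")
```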

Posted 2 weeks ago

Apply

5.0 - 10.0 years

5 - 10 Lacs

Pimpri-Chinchwad

Work from Office

Role & responsibilities:
1. Management of Medical Records
Maintain and manage patient medical records ensuring accuracy, completeness, and confidentiality. Organize and file medical records systematically for easy retrieval. Ensure all medical records are updated in a timely manner with relevant clinical notes, reports, test results, and discharge summaries. Manage both paper-based and electronic health records (EHRs) efficiently.
2. Coding and Classification
Supervise or assist with medical coding (ICD-11, CPT) to classify diagnoses and procedures accurately. Ensure coding aligns with hospital billing and insurance requirements. Coordinate with the billing and insurance departments to resolve discrepancies related to coding.
3. Data Accuracy and Quality Control
Conduct periodic audits of medical records to maintain data integrity and compliance with regulatory standards (NABH, ISO, JCI). Identify and rectify errors, missing information, or discrepancies in patient files. Ensure compliance with legal and ethical standards governing patient data privacy (e.g., HIPAA-like rules).
4. Reporting & Documentation
Prepare and submit periodic reports on medical records activities to hospital administration. Provide statistical data and analysis related to patient admissions, discharges, morbidity, and mortality for research and management purposes. Maintain records of medical records personnel attendance, performance, and training.
5. Supervision and Training
Supervise Medical Records Department (MRD) staff including coders, record technicians, and clerks. Conduct or arrange training programs to update staff on new coding standards, software, and regulatory changes. Ensure smooth workflow and task delegation in the MRD.
6. Coordination
Coordinate with clinical departments, nursing, billing, and IT teams for the smooth exchange of medical information. Support medical audit and research activities by providing required data and documentation. Liaise with external bodies for compliance, accreditation, and legal matters related to medical records.
7. System Maintenance and Upgradation
Assist in implementing and upgrading Hospital Information Systems (HIS) and Electronic Medical Records (EMR) systems. Facilitate smooth migration of paper records to digital formats when applicable. Ensure backup and disaster recovery procedures for medical records.
8. Compliance and Confidentiality
Maintain strict confidentiality and safeguard patient privacy at all times. Ensure adherence to hospital policies and governmental laws related to health information management.

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
