4.0 - 6.0 years
6 - 10 Lacs
Chennai
Work from Office
As a Senior Cloud Data Platform (AWS) Specialist at Incedo, you will be responsible for designing, deploying and maintaining cloud-based data platforms on AWS. You will work with data engineers, data scientists and business analysts to understand business requirements and design scalable, reliable and cost-effective solutions that meet those requirements.
Roles & Responsibilities:
- Designing, developing and deploying cloud-based data platforms using Amazon Web Services (AWS)
- Integrating and processing large amounts of structured and unstructured data from various sources
- Implementing and optimizing ETL processes and data pipelines (see the sketch below)
- Developing and maintaining security and access controls
- Collaborating with other teams to ensure the consistency and integrity of data
- Troubleshooting and resolving data platform issues
Skills Requirements:
- In-depth knowledge of AWS services and tools such as AWS Glue, Amazon Redshift and AWS Lambda
- Experience in building scalable and reliable data pipelines using AWS services, Apache Spark and related big data technologies
- Familiarity with cloud-based infrastructure and deployment, specifically on AWS
- Strong knowledge of programming languages such as Python, Java and SQL
- Excellent communication skills, with the ability to convey complex technical information to non-technical stakeholders in a clear and concise manner
- Understanding of, and alignment with, the company's long-term vision
- Ability to provide leadership, guidance and support to team members, ensuring the successful completion of tasks and promoting a positive work environment that fosters collaboration and productivity, taking responsibility for the whole team
Qualifications
- 4-6 years of work experience in a relevant field
- B.Tech/B.E/M.Tech or MCA degree from a reputed university; a computer science background is preferred
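For illustration only, a minimal sketch of one glue-code pattern this role involves: an AWS Lambda handler that kicks off a Glue job run via boto3. The job name and argument key are hypothetical placeholders, not details from the posting.

```python
import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    """Start a Glue ETL job when triggered (e.g. by an S3 event or a schedule)."""
    response = glue.start_job_run(
        JobName="nightly-etl",  # hypothetical Glue job name
        Arguments={"--target_date": event.get("date", "latest")},
    )
    # Return the run id so callers can poll job status with get_job_run.
    return {"JobRunId": response["JobRunId"]}
```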
Posted 2 weeks ago
7.0 - 12.0 years
10 - 14 Lacs
Gurugram
Work from Office
Company Overview
Incedo is a US-based consulting, data science and technology services firm with over 3,000 people helping clients from our six offices across the US, Mexico and India. We help our clients achieve competitive advantage through end-to-end digital transformation. Our uniqueness lies in bringing together strong engineering, data science and design capabilities coupled with deep domain understanding. We combine services and products to maximize business impact for our clients in the telecom, banking, wealth management, product engineering and life sciences & healthcare industries.
Working at Incedo will provide you with an opportunity to work with industry-leading client organizations, deep technology and domain experts, and global teams. Incedo University, our learning platform, provides ample learning opportunities, starting with a structured onboarding program and continuing throughout various stages of your career. A variety of fun activities is also an integral part of our friendly work environment. Our flexible career paths allow you to grow into a program manager, a technical architect or a domain expert based on your skills and interests.
Our mission is to enable our clients to maximize business impact from technology by:
- Harnessing the transformational impact of emerging technologies
- Bridging the gap between business and technology
Role Description
As an AWS Data Engineer, your role will be to design, develop and maintain scalable data pipelines on AWS. You will work closely with technical analysts, client stakeholders, data scientists and other team members to ensure data quality and integrity while optimizing data storage solutions for performance and cost-efficiency. This role requires leveraging AWS native technologies and Databricks for data transformations and scalable data processing.
Responsibilities
- Lead and support the delivery of data platform modernization projects.
- Design and develop robust and scalable data pipelines leveraging AWS native services.
- Optimize ETL processes, ensuring efficient data transformation (see the sketch below).
- Migrate workflows from on-premise to the AWS cloud, ensuring data quality and consistency.
- Design automations and integrations to resolve data inconsistencies and quality issues.
- Perform system testing and validation to ensure successful integration and functionality.
- Implement security and compliance controls in the cloud environment.
- Ensure data quality pre- and post-migration through validation checks, addressing issues of completeness, consistency and accuracy.
- Collaborate with data architects and lead developers to identify and document manual data movement workflows and design automation strategies.
Qualifications
- 7+ years of experience with a core data engineering skillset leveraging AWS native technologies (AWS Glue, Python, Snowflake, S3, Redshift).
- Experience in the design and development of robust and scalable data pipelines leveraging AWS native services.
- Proficiency in leveraging Snowflake for data transformations, optimization of ETL pipelines and scalable data processing.
- Experience with streaming and batch data pipeline/engineering architectures.
- Familiarity with DataOps concepts and tooling for source control and setting up CI/CD pipelines on AWS.
- Hands-on experience with Databricks and a willingness to grow capabilities.
- Experience with data engineering and storage solutions (AWS Glue, EMR, Lambda, Redshift, S3).
- Strong problem-solving and analytical skills.
- Knowledge of Dataiku is needed.
- Graduate/Post-Graduate degree in Computer Science or a related field.
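As a rough illustration of the AWS-native pipeline step this role describes, the sketch below reads a cataloged table with AWS Glue's DynamicFrame API, drops incomplete rows, and writes Parquet back to S3. The database, table, column and bucket names are hypothetical.

```python
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a cataloged source table into a DynamicFrame.
source = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db",      # hypothetical catalog database
    table_name="raw_orders",  # hypothetical table name
)

# Drop rows missing the business key before loading downstream.
clean = source.toDF().dropna(subset=["order_id"])

# Land Parquet on S3 for downstream consumers (e.g. a Snowflake external stage).
clean.write.mode("overwrite").parquet("s3://example-bucket/cleansed/orders/")
job.commit()
```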
Posted 2 weeks ago
3.0 - 8.0 years
6 - 10 Lacs
Gurugram
Work from Office
Role Description
- Understands the process flow and its impact on the project module outcome.
- Works on coding assignments for specific technologies based on the project requirements and available documentation.
- Debugs basic software components and identifies code defects.
- Focuses on building depth in project-specific technologies, and is expected to develop domain knowledge along with technical skills.
- Communicates effectively with team members, project managers and clients, as required.
- A proven high performer and team player, with the ability to take the lead on projects.
- Design and create S3 buckets and folder structures (raw, cleansed_data, output, script, temp-dir, spark-ui).
- Develop AWS Lambda functions (Python/Boto3) to download Bhav Copy via REST API and ingest it into S3.
- Author and maintain AWS Glue Spark jobs to partition data by scrip, year and month, and to convert CSV to Parquet with Snappy compression (see the sketch after this list).
- Configure and run AWS Glue Crawlers to populate the Glue Data Catalog.
- Write and optimize AWS Athena SQL queries to generate business-ready datasets.
- Monitor, troubleshoot and tune data workflows for cost and performance.
- Document architecture, code and operational runbooks.
- Collaborate with analytics and downstream teams to understand requirements and deliver SLAs.
Technical Skills
- 3+ years of hands-on experience with AWS data services (S3, Lambda, Glue, Athena)
- PostgreSQL basics
- Proficient in SQL and data partitioning strategies
- Experience with the Parquet file format and compression techniques (Snappy)
- Ability to configure Glue Crawlers and manage the AWS Glue Data Catalog
- Understanding of serverless architecture and best practices in security, encryption and cost control
- Good documentation, communication and problem-solving skills
Qualifications
- 3-5 years of work experience in a relevant field
- B.Tech/B.E/M.Tech or MCA degree from a reputed university; a computer science background is preferred
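A minimal sketch of the CSV-to-Parquet conversion step described above, assuming it runs inside a Glue Spark job with a SparkSession available. The bucket paths and Bhav Copy column names (SYMBOL, TIMESTAMP) are assumptions and would need to match the actual files.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bhavcopy-to-parquet").getOrCreate()

# Raw Bhav Copy CSVs land under raw/ via the Lambda ingester.
df = spark.read.option("header", "true").csv("s3://example-bucket/raw/")

# Derive partition columns from the trade date (assumed format like 01-APR-2024).
df = (df.withColumn("trade_date", F.to_date("TIMESTAMP", "dd-MMM-yyyy"))
        .withColumn("year", F.year("trade_date"))
        .withColumn("month", F.month("trade_date")))

# Write Snappy-compressed Parquet, partitioned by scrip, year and month.
(df.write.mode("overwrite")
   .partitionBy("SYMBOL", "year", "month")
   .option("compression", "snappy")
   .parquet("s3://example-bucket/cleansed_data/"))
```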
Posted 2 weeks ago
5.0 - 8.0 years
7 - 10 Lacs
Mumbai
Work from Office
Role Purpose
The purpose of this role is to design, test and maintain software programs for operating systems or applications that need to be deployed at a client end, and to ensure they meet 100% quality assurance parameters.
Responsibilities:
- Design and implement data modeling, data ingestion and data processing for various datasets
- Design, develop and maintain an ETL framework for new data sources
- Develop data ingestion using AWS Glue/EMR and data pipelines using PySpark, Python and Databricks
- Build orchestration workflows using Airflow and Databricks job workflows (see the sketch below)
- Develop and execute ad hoc data ingestion to support business analytics
- Proactively interact with vendors on any questions and report status accordingly
- Explore and evaluate tools/services to support business requirements
- Help create a data-driven culture and impactful data strategies
- Aptitude for learning new technologies and solving complex problems
Qualifications:
- Minimum of a bachelor's degree, preferably in Computer Science, Information Systems or Information Technology
- Minimum 5 years of experience on cloud platforms such as AWS, Azure or GCP
- Minimum 5 years of experience with Amazon Web Services such as VPC, S3, EC2, Redshift, RDS, EMR, Athena, IAM, Glue, DMS, Data Pipeline & API, Lambda, etc.
- Minimum 5 years of experience in ETL and data engineering using Python, AWS Glue, AWS EMR/PySpark and Airflow for orchestration
- Minimum 2 years of experience in Databricks, including Unity Catalog, data engineering job workflow orchestration and dashboard generation based on business requirements
- Minimum 5 years of experience in SQL, Python and source control such as Bitbucket, with CI/CD for code deployment
- Experience with PostgreSQL, SQL Server, MySQL and Oracle databases
- Experience with MPP systems such as AWS Redshift, AWS EMR, and Databricks SQL warehouses and compute clusters
- Experience in distributed programming with Python, Unix scripting, MPP and RDBMS databases for data integration
- Experience building distributed high-performance systems using Spark/PySpark and AWS Glue, and developing applications for loading/streaming data into Databricks SQL warehouses and Redshift
- Experience with Agile methodology
- Proven ability to write technical specifications for data extraction and good-quality code
- Experience with big data processing techniques using Sqoop, Spark and Hive is an additional plus
- Experience with data visualization tools including Power BI and Tableau
- Nice to have: experience building UIs using the Python Flask framework and Angular
Mandatory Skills: Python for Insights
Experience: 5-8 years
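One plausible shape for the Airflow orchestration mentioned above, assuming Airflow 2.4+ with the Amazon provider package installed; the DAG id and Glue job name are placeholders, not details from the posting.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

# A daily DAG that triggers an existing Glue ingestion job.
with DAG(
    dag_id="daily_ingest",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_glue = GlueJobOperator(
        task_id="run_glue_job",
        job_name="nightly-etl",  # hypothetical Glue job name
    )
```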
Posted 2 weeks ago
5.0 - 10.0 years
10 - 16 Lacs
Hyderabad
Remote
Job description
As an ETL Developer for the Data and Analytics team at Guidewire, you will participate and collaborate with our customers and SI partners who are adopting the Guidewire Data Platform as the centerpiece of their data foundation. You will facilitate, and be an active developer when necessary, to operationalize the agreed-upon ETL architecture goals of our customers, adhering to Guidewire best practices and standards. You will work with our customers, partners and other Guidewire team members to deliver successful data transformation initiatives. You will utilize best practices for the design, development and delivery of customer projects. You will share knowledge with the wider Guidewire Data and Analytics team to enable predictable project outcomes and emerge as a leader in our thriving data practice. One of our principles is to have fun while we deliver, so this role will need to keep the delivery process fun and engaging for the team in collaboration with the broader organization.
Given the dynamic nature of the work in the Data and Analytics team, we are looking for decisive, highly skilled technical problem solvers who are self-motivated and take proactive action for the benefit of our customers, ensuring that they succeed in their journey to the Guidewire Cloud Platform. You will collaborate closely with teams located around the world and adhere to our core values: Integrity, Collegiality and Rationality.
Key Responsibilities:
- Build out technical processes from specifications provided in High Level Design and data specification documents.
- Integrate test and validation processes and methods into every step of the development process.
- Work with lead architects and provide input into defining user stories, scope, acceptance criteria and estimates.
- Apply a systematic problem-solving approach, coupled with a sense of ownership and drive.
- Work independently in a fast-paced Agile environment.
- Actively contribute to the knowledge base from every project you are assigned to.
Qualifications:
- Bachelor's or Master's degree in Computer Science, or an equivalent level of demonstrable professional competency, and 3-5+ years in a technical capacity building out complex ETL data integration frameworks.
- 3+ years of experience with data processing and ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) concepts.
- Experience with ADF or AWS Glue, Spark/Scala, GDP, CDC and ETL data integration.
- Experience working with relational and/or NoSQL databases.
- Experience working with different cloud platforms (such as AWS, Azure, Snowflake, Google Cloud, etc.).
- Ability to work independently and within a team.
Nice to have:
- Insurance industry experience
- Experience with Azure Data Factory and Spark/Scala
- Experience with the Guidewire Data Platform
Posted 2 weeks ago
8.0 - 12.0 years
15 - 30 Lacs
Gurugram
Work from Office
Role description
- Lead and mentor a team of data engineers to design, develop and maintain high-performance data pipelines and platforms.
- Architect scalable ETL/ELT processes, streaming pipelines and data lake/warehouse solutions (e.g., Redshift, Snowflake, BigQuery); a streaming sketch follows this list.
- Own the roadmap and technical vision for the data engineering function, ensuring best practices in data modeling, governance, quality and security.
- Drive adoption of modern data stack tools (e.g., Airflow, Kafka, Spark) and foster a culture of continuous improvement.
- Ensure the platform is reliable, scalable and cost-effective across batch and real-time use cases.
- Champion data observability, lineage and privacy initiatives to ensure trust in data across the org.
Skills
- Bachelor's or Master's degree in Computer Science, Engineering or a related technical field.
- 8+ years of hands-on experience in data engineering, with at least 2 years in a leadership or managerial role.
- Proven experience with distributed data processing frameworks such as Apache Spark, Flink or Kafka.
- Strong SQL skills and experience in data modeling, data warehousing and schema design.
- Proficiency with cloud platforms (AWS/GCP/Azure) and their native data services (e.g., AWS Glue, Redshift, EMR, BigQuery).
- Solid grasp of data architecture, system design and performance optimization at scale.
- Experience working in an Agile development environment and managing sprint-based delivery cycles.
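As an illustration of the real-time side of the architectures this role covers, a minimal Spark Structured Streaming job that consumes a Kafka topic and lands micro-batches on S3. The broker address, topic and paths are hypothetical, and the job needs the spark-sql-kafka package on the classpath.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("events-stream").getOrCreate()

# Read a Kafka topic as an unbounded stream of key/value records.
events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
          .option("subscribe", "clickstream")                # hypothetical topic
          .load()
          .selectExpr("CAST(value AS STRING) AS payload", "timestamp"))

# Land micro-batches as Parquet; the checkpoint directory makes the job restartable.
query = (events.writeStream
         .format("parquet")
         .option("path", "s3://example-bucket/events/")
         .option("checkpointLocation", "s3://example-bucket/checkpoints/events/")
         .trigger(processingTime="1 minute")
         .start())
query.awaitTermination()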
Posted 2 weeks ago
3.0 - 5.0 years
5 - 7 Lacs
Bengaluru, Karnataka
Work from Office
Data Governance
The team will be the central data governance team for Kotak Bank, managing metadata platforms and the Data Privacy, Data Security, Data Stewardship and Data Quality platforms. If you have the right data skills and are ready to build data lake solutions from scratch for high-concurrency systems involving multiple systems, then this is the team for you. Your day-to-day role will include:
- Drive business decisions with technical input and lead the team.
- Design, implement, and support a data infrastructure from scratch.
- Manage AWS resources, including EC2, EMR, S3, Glue, Redshift, and MWAA.
- Extract, transform, and load data from various sources using SQL and AWS big data technologies (see the sketch below).
- Explore and learn the latest AWS technologies to enhance capabilities and efficiency.
- Collaborate with data scientists and BI engineers to adopt best practices in reporting and analysis.
- Improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.
- Build data platforms, data pipelines, or data management and governance tools.
BASIC QUALIFICATIONS for Data Engineer
- Bachelor's degree in Computer Science, Engineering, or a related field
- 3-5 years of experience in data engineering
- Strong understanding of AWS technologies, including S3, Redshift, Glue, and EMR
- Experience with data pipeline tools such as Airflow and Spark
- Experience with data modeling and data quality best practices
- Excellent problem-solving and analytical skills
- Strong communication and teamwork skills
- Experience in at least one modern scripting or programming language, such as Python, Java, or Scala
- Strong advanced SQL skills
PREFERRED QUALIFICATIONS
- AWS cloud technologies: Redshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, IAM, Airflow
- Prior experience in the Indian banking segment and/or fintech is desired
- Experience with non-relational databases and data stores
- Building and operating highly available, distributed data processing systems for large datasets
- Professional software engineering and best practices for the full software development life cycle
- Designing, developing, and implementing different types of data warehousing layers
- Leading the design, implementation, and successful delivery of large-scale, critical, or complex data solutions
- Building scalable data infrastructure and understanding of distributed systems concepts
- SQL, ETL, and data modelling
- Ensuring the accuracy and availability of data to customers
- Proficient in at least one scripting or programming language for handling large-volume data processing
- Strong presentation and communication skills
For Managers:
- Customer centricity and obsession for the customer
- Ability to manage stakeholders (product owners, business stakeholders, cross-functional teams) and coach agile ways of working
- Ability to structure and organize teams, and streamline communication
- Prior work experience executing large-scale data engineering projects
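For illustration, one way such a team might load cleansed S3 data into Redshift without managing JDBC drivers: the Redshift Data API via boto3. The cluster identifier, database user, table and role ARN below are placeholders, not details from the posting.

```python
import boto3

client = boto3.client("redshift-data")

# Submit a COPY that bulk-loads a partition of cleansed Parquet into a staging table.
client.execute_statement(
    ClusterIdentifier="analytics-cluster",  # hypothetical cluster
    Database="dev",
    DbUser="etl_user",                      # hypothetical database user
    Sql="""
        COPY staging.transactions
        FROM 's3://example-bucket/cleansed/transactions/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'
        FORMAT AS PARQUET;
    """,
)
```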
Posted 2 weeks ago
10.0 - 15.0 years
12 - 17 Lacs
Bengaluru
Work from Office
Experience: Minimum of 10+ years in database development and management roles.
- SQL Mastery: Advanced expertise in crafting and optimizing complex SQL queries and scripts.
- AWS Redshift: Proven experience in managing, tuning, and optimizing large-scale Redshift clusters.
- PostgreSQL: Deep understanding of PostgreSQL, including query planning, indexing strategies, and advanced tuning techniques.
- Data Pipelines: Extensive experience in ETL development and integrating data from multiple sources into cloud environments.
- Cloud Proficiency: Strong experience with AWS services like ECS, S3, KMS, Lambda, Glue, and IAM.
- Data Modeling: Comprehensive knowledge of data modeling techniques for both OLAP and OLTP systems.
- Scripting: Proficiency in Python, C#, or other scripting languages for automation and data manipulation.
Preferred Qualifications
- Leadership: Prior experience in leading database or data engineering teams.
- Data Visualization: Familiarity with reporting and visualization tools like Tableau, Power BI, or Looker.
- DevOps: Knowledge of CI/CD pipelines, infrastructure as code (e.g., Terraform), and version control (Git).
- Certifications: Any relevant certifications (e.g., AWS Certified Solutions Architect, AWS Certified Database - Specialty, PostgreSQL Certified Professional) will be a plus.
- Azure Databricks: Familiarity with Azure Databricks for data engineering and analytics workflows will be a significant advantage.
Soft Skills
- Strong problem-solving and analytical capabilities.
- Exceptional communication skills for collaboration with technical and non-technical stakeholders.
- A results-driven mindset with the ability to work independently or lead within a team.
Qualification: Bachelor's or Master's degree in Computer Science, Information Systems, Engineering or equivalent, with 10+ years of experience.
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
As the Lead Data Engineer at Mastercard, you will be responsible for designing and building scalable, cloud-native data platforms using PySpark, Python, and modern data engineering practices. Your role will involve mentoring and guiding other engineers, fostering a culture of curiosity and continuous improvement, and creating robust ETL/ELT pipelines to serve business-critical use cases. You will lead by example by writing high-quality, testable code, participating in architecture and design discussions, and decomposing complex problems into scalable components aligned with platform and product goals. Championing best practices in data engineering, you will drive collaboration across teams, support data governance and quality efforts, and continuously learn and apply new technologies to improve team productivity and platform reliability.
To succeed in this role, you should have at least 5 years of hands-on experience in data engineering with strong PySpark and Python skills. You should also possess solid experience in designing and implementing data models, pipelines, and batch/stream processing systems. Additionally, you should be comfortable working with cloud platforms such as AWS, Azure, or GCP and have a strong foundation in data modeling, database design, and performance optimization. A bachelor's degree in computer science, engineering, or a related field is required, along with experience in Agile/Scrum development environments. Experience with CI/CD practices, version control, and automated testing is essential, as is the ability to mentor and uplift junior engineers. Familiarity with cloud-related services like S3, Glue, Data Factory, and Databricks is highly desirable.
Furthermore, exposure to data governance tools and practices, orchestration tools, containerization, and infrastructure automation will be advantageous. A master's degree, relevant certifications, or contributions to open-source/data engineering communities will be considered a bonus. Exposure to machine learning data pipelines or MLOps is also a plus.
If you are a curious, adaptable, and driven individual who enjoys problem-solving and continuous improvement, and if you have a passion for building clean data pipelines and cloud-native designs, then this role is perfect for you. Join us at Mastercard and be part of a team that is dedicated to unlocking the potential of data assets and shaping the future of data engineering.
Posted 2 weeks ago
3.0 - 6.0 years
12 - 16 Lacs
Thiruvananthapuram
Work from Office
- AWS Cloud Services (Glue, Lambda, Athena, Lakehouse)
- AWS CDK for Infrastructure-as-Code (IaC) with TypeScript (see the sketch below)
- Data pipeline development and orchestration using AWS Glue
- Strong programming skills in Python, PySpark, Spark SQL and TypeScript
Required Candidate Profile
- 3 to 5 years of experience
- Client-facing and team leadership experience
- Candidates will work with UK clients; work timings will be aligned with the client's requirements and may follow UK time zones
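The role calls for CDK in TypeScript; purely for consistency with the other sketches here, the equivalent constructs are shown in Python (the CDK construct model is the same across languages). The stack and bucket names are illustrative only.

```python
from aws_cdk import App, RemovalPolicy, Stack
from aws_cdk import aws_s3 as s3
from constructs import Construct

class DataLakeStack(Stack):
    """Provision the raw/cleansed/output buckets a Glue pipeline reads and writes."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        for name in ("raw", "cleansed-data", "output"):
            s3.Bucket(
                self,
                f"{name}-bucket",
                versioned=True,
                removal_policy=RemovalPolicy.RETAIN,  # keep data if the stack is deleted
            )

app = App()
DataLakeStack(app, "DataLakeStack")
app.synth()
```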
Posted 2 weeks ago
2.0 - 3.0 years
5 - 9 Lacs
Kochi
Work from Office
Job Title - Data Engineer Sr. Analyst ACS Song
Management Level: Level 10 - Sr. Analyst
Location: Kochi, Coimbatore, Trivandrum
Must have skills: Python/Scala, PySpark/PyTorch
Good to have skills: Redshift
Job Summary
You'll capture user requirements and translate them into business and digitally enabled solutions across a range of industries.
Roles and Responsibilities
- Designing, developing, optimizing, and maintaining data pipelines that adhere to ETL principles and business goals
- Solving complex data problems to deliver insights that help our business achieve its goals
- Sourcing data (structured and unstructured) from various touchpoints, and formatting and organizing it into an analyzable format
- Creating data products for analytics team members to improve productivity
- Calling AI services like vision and translation to generate outcomes that can be used in further steps along the pipeline (see the sketch below)
- Fostering a culture of sharing, re-use, design, and operational efficiency of data and analytical solutions
- Preparing data to create a unified database and building tracking solutions that ensure data quality
- Creating production-grade analytical assets deployed using the guiding principles of CI/CD
Professional and Technical Skills
- Expert in at least two of Python, Scala, PySpark, PyTorch, and JavaScript
- Extensive experience in data analysis in big data environments (Apache Spark), data libraries (e.g., Pandas, SciPy, TensorFlow, Keras), and SQL, with 2-3 years of hands-on experience with these technologies
- Experience in one of the many BI tools such as Tableau, Power BI, or Looker
- Good working knowledge of key concepts in data analytics, such as dimensional modeling, ETL, reporting/dashboarding, data governance, dealing with structured and unstructured data, and corresponding infrastructure needs
- Extensive experience with Microsoft Azure (ADF, Function Apps, ADLS, Azure SQL), AWS (Lambda, Glue, S3), Databricks analytical platforms/tools, and the Snowflake Cloud Data Warehouse
Additional Information
- Experience working with cloud data warehouses like Redshift or Synapse
- Certification in any one of the following or equivalent: AWS Certified Data Analytics - Specialty; Microsoft Certified Azure Data Scientist Associate; Snowflake SnowPro Core - Data Engineer; Databricks Data Engineering
About Our Company | Accenture
Qualification
- Experience: 3.5-5 years of experience is required
- Educational Qualification: Graduation
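A minimal sketch of "calling an AI service" as described above, using Amazon Translate through boto3; the wrapper function and sample text are illustrative only.

```python
import boto3

translate = boto3.client("translate")

def translate_text(text: str, target: str = "en") -> str:
    """Translate a single record's text as one step in a pipeline."""
    result = translate.translate_text(
        Text=text,
        SourceLanguageCode="auto",   # let the service detect the source language
        TargetLanguageCode=target,
    )
    return result["TranslatedText"]

print(translate_text("Bonjour le monde"))  # expected: "Hello world" (approximately)
```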
Posted 2 weeks ago
3.0 - 8.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: Amazon Web Services (AWS)
Good to have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years full time education
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various stakeholders to gather requirements, overseeing the development process, and ensuring that the applications meet the specified needs. You will also engage in problem-solving discussions with your team, providing guidance and support to ensure successful project outcomes. Additionally, you will monitor project progress, address any challenges that arise, and facilitate communication among team members to foster a productive work environment.
Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Mentor junior team members to support their professional growth.
Professional & Technical Skills:
- Primary: AWS + Python
- Secondary: DevOps, Terraform
- Good to have: AWS CDK
- 3-4 years of overall software development experience, with strong hands-on experience in AWS and Python.
- Hands-on experience with AWS services: EC2, Lambda, SNS, SQS, Glue, Step Functions, CloudWatch, API Gateway, EMR, S3, DynamoDB, RDS, Athena.
- Hands-on experience in writing Python code for AWS services like Glue jobs, Lambda and AWS CDK.
- Strong technical and debugging hands-on skills.
- Strong DevOps experience in Terraform, Git and CI/CD.
- Experience working in Agile development environments.
- Strong verbal and written communication skills, with the ability to engage directly with clients.
Additional Information:
- The candidate should have a minimum of 3 years of experience in Amazon Web Services (AWS).
- This position is based at our Bengaluru office.
- A 15-year full-time education is required.
- Shift Timing: 12:30 PM to 9:30 PM IST [Weekdays]
Qualification: 15 years full time education
Posted 2 weeks ago
3.0 - 6.0 years
3 - 5 Lacs
Noida, Gurugram, Delhi / NCR
Work from Office
Job Title: Data Engineer - Snowflake & ETL Specialist
Experience: 3-6 years
Employment Type: Full-time; immediate joiners
Location: Gurgaon
Department: Data Engineering / Analytics
Job Summary: We are seeking a skilled Data Engineer with strong hands-on experience in Snowflake, ETL development, and AWS Glue. The ideal candidate will be responsible for designing, building, and optimizing scalable data pipelines and data warehouse solutions that support enterprise-level analytics and reporting needs.
Key Responsibilities:
- Develop, optimize, and manage ETL pipelines using AWS Glue, Python, and Snowflake (see the sketch below).
- Design and implement data warehouse solutions and data models based on business requirements.
- Work closely with data analysts, BI developers, and stakeholders to ensure clean, consistent, and reliable data delivery.
- Monitor and troubleshoot performance issues related to data pipelines and queries in Snowflake.
- Participate in code reviews, documentation, and knowledge-sharing activities.
- Ensure data security, governance, and compliance with organizational policies.
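As a sketch of the Snowflake side of such a pipeline, a COPY INTO run through the Snowflake Python connector. Every connection parameter, stage and table name below is a placeholder; in practice they would come from a secrets manager rather than being hard-coded.

```python
import snowflake.connector

# All connection parameters are hypothetical placeholders.
conn = snowflake.connector.connect(
    account="xy12345",
    user="ETL_USER",
    password="***",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="STAGING",
)

with conn.cursor() as cur:
    # Bulk-load Parquet files written by the Glue pipeline from an external stage.
    cur.execute("""
        COPY INTO staging.orders
        FROM @s3_stage/orders/
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)
conn.close()
```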
Posted 2 weeks ago
2.0 - 5.0 years
30 - 32 Lacs
Bengaluru
Work from Office
Data Engineer - 2 (Experience: 2-5 years)
What we offer
Our mission is simple: building trust. Our customers' trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That's why we at Kotak Group are dedicated to transforming banking by imbibing a technology-first approach in everything we do, with an aim to enhance customer experience by providing superior banking services. We welcome and invite the best technological minds in the country to come join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many.
About our team
DEX is the central data org for Kotak Bank and manages the entire data experience of Kotak Bank. DEX stands for Kotak's Data Exchange. The org comprises the Data Platform, Data Engineering and Data Governance charters, and sits closely with the Analytics org. DEX is primarily working on a greenfield project to revamp the entire data platform from on-premise solutions to a scalable AWS cloud-based platform. The team is being built from the ground up, which provides great opportunities for technology fellows to build things from scratch and create one of the best-in-class data lakehouse solutions. The primary skills this team should encompass are software development skills (preferably Python) for platform building on AWS; data engineering with Spark (PySpark, Spark SQL, Scala) for ETL development; and advanced SQL and data modelling for analytics. The org size is expected to be around a 100+ member team, primarily based out of Bangalore and comprising ~10 sub-teams independently driving their charters.
As a member of this team, you get the opportunity to learn the fintech space, one of the most sought-after domains today; be an early member in the digital transformation journey of Kotak; learn and leverage technology to build complex data platform solutions, including real-time, micro-batch, batch and analytics solutions, in a programmatic way; and be futuristic, building systems which can be operated by machines using AI technologies.
The data platform org is divided into 3 key verticals:
Data Platform
This vertical is responsible for building the data platform, which includes optimized storage for the entire bank and building the centralized data lake, a managed compute and orchestration framework (including concepts of serverless data solutions), managing the central data warehouse for extremely high-concurrency use cases, building connectors for different sources, building the customer feature repository, building cost-optimization solutions like EMR optimizers, performing automations and building observability capabilities for Kotak's data platform. The team will also be the center for Data Engineering excellence, driving trainings and knowledge-sharing sessions with the large data consumer base within Kotak.
Data Engineering
This team will own data pipelines for thousands of datasets, be skilled at sourcing data from 100+ source systems, and enable data consumption for 30+ data analytics products. The team will learn and build data models in a config-based and programmatic way, and think big to build one of the most leveraged data models for financial orgs. This team will also enable centralized reporting for Kotak Bank, which cuts across multiple products and dimensions. Additionally, the data built by this team will be consumed by 20K+ branch consumers, RMs, branch managers and all analytics use cases.
Data Governance
The team will be the central data governance team for Kotak Bank, managing metadata platforms and the Data Privacy, Data Security, Data Stewardship and Data Quality platforms. If you have the right data skills and are ready to build data lake solutions from scratch for high-concurrency systems involving multiple systems, then this is the team for you. Your day-to-day role will include:
- Drive business decisions with technical input and lead the team.
- Design, implement, and support a data infrastructure from scratch.
- Manage AWS resources, including EC2, EMR, S3, Glue, Redshift, and MWAA.
- Extract, transform, and load data from various sources using SQL and AWS big data technologies.
- Explore and learn the latest AWS technologies to enhance capabilities and efficiency.
- Collaborate with data scientists and BI engineers to adopt best practices in reporting and analysis.
- Improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.
- Build data platforms, data pipelines, or data management and governance tools.
BASIC QUALIFICATIONS for Data Engineer / SDE in Data
- Bachelor's degree in Computer Science, Engineering, or a related field
- Experience in data engineering
- Strong understanding of AWS technologies, including S3, Redshift, Glue, and EMR
- Experience with data pipeline tools such as Airflow and Spark
- Experience with data modeling and data quality best practices
- Excellent problem-solving and analytical skills
- Strong communication and teamwork skills
- Experience in at least one modern scripting or programming language, such as Python, Java, or Scala
- Strong advanced SQL skills
PREFERRED QUALIFICATIONS
- AWS cloud technologies: Redshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, IAM, Airflow
- Prior experience in the Indian banking segment and/or fintech is desired
- Experience with non-relational databases and data stores
- Building and operating highly available, distributed data processing systems for large datasets
- Professional software engineering and best practices for the full software development life cycle
- Designing, developing, and implementing different types of data warehousing layers
- Leading the design, implementation, and successful delivery of large-scale, critical, or complex data solutions
- Building scalable data infrastructure and understanding of distributed systems concepts
- SQL, ETL, and data modelling
- Ensuring the accuracy and availability of data to customers
- Proficient in at least one scripting or programming language for handling large-volume data processing
- Strong presentation and communication skills
Posted 2 weeks ago
5.0 - 9.0 years
30 - 32 Lacs
Bengaluru
Work from Office
Data Engineering Manager (5-9 years) / Software Development Manager (9+ years)
Kotak Mahindra Bank, Bengaluru, Karnataka, India (On-site)
What we offer
Our mission is simple: building trust. Our customers' trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That's why we at Kotak Group are dedicated to transforming banking by imbibing a technology-first approach in everything we do, with an aim to enhance customer experience by providing superior banking services. We welcome and invite the best technological minds in the country to come join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many.
About our team
DEX is the central data org for Kotak Bank and manages the entire data experience of Kotak Bank. DEX stands for Kotak's Data Exchange. The org comprises the Data Platform, Data Engineering and Data Governance charters, and sits closely with the Analytics org. DEX is primarily working on a greenfield project to revamp the entire data platform from on-premise solutions to a scalable AWS cloud-based platform. The team is being built from the ground up, which provides great opportunities for technology fellows to build things from scratch and create one of the best-in-class data lakehouse solutions. The primary skills this team should encompass are software development skills (preferably Python) for platform building on AWS; data engineering with Spark (PySpark, Spark SQL, Scala) for ETL development; and advanced SQL and data modelling for analytics. The org size is expected to be around a 100+ member team, primarily based out of Bangalore and comprising ~10 sub-teams independently driving their charters.
As a member of this team, you get the opportunity to learn the fintech space, one of the most sought-after domains today; be an early member in the digital transformation journey of Kotak; learn and leverage technology to build complex data platform solutions, including real-time, micro-batch, batch and analytics solutions, in a programmatic way; and be futuristic, building systems which can be operated by machines using AI technologies.
The data platform org is divided into 3 key verticals:
Data Platform
This vertical is responsible for building the data platform, which includes optimized storage for the entire bank and building the centralized data lake, a managed compute and orchestration framework (including concepts of serverless data solutions), managing the central data warehouse for extremely high-concurrency use cases, building connectors for different sources, building the customer feature repository, building cost-optimization solutions like EMR optimizers, performing automations and building observability capabilities for Kotak's data platform. The team will also be the center for Data Engineering excellence, driving trainings and knowledge-sharing sessions with the large data consumer base within Kotak.
Data Engineering
This team will own data pipelines for thousands of datasets, be skilled at sourcing data from 100+ source systems, and enable data consumption for 30+ data analytics products. The team will learn and build data models in a config-based and programmatic way, and think big to build one of the most leveraged data models for financial orgs. This team will also enable centralized reporting for Kotak Bank, which cuts across multiple products and dimensions. Additionally, the data built by this team will be consumed by 20K+ branch consumers, RMs, branch managers and all analytics use cases.
Data Governance
The team will be the central data governance team for Kotak Bank, managing metadata platforms and the Data Privacy, Data Security, Data Stewardship and Data Quality platforms. If you have the right data skills and are ready to build data lake solutions from scratch for high-concurrency systems involving multiple systems, then this is the team for you. Your day-to-day role will include:
- Drive business decisions with technical input and lead the team.
- Design, implement, and support a data infrastructure from scratch.
- Manage AWS resources, including EC2, EMR, S3, Glue, Redshift, and MWAA.
- Extract, transform, and load data from various sources using SQL and AWS big data technologies.
- Explore and learn the latest AWS technologies to enhance capabilities and efficiency.
- Collaborate with data scientists and BI engineers to adopt best practices in reporting and analysis.
- Improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.
- Build data platforms, data pipelines, or data management and governance tools.
BASIC QUALIFICATIONS for Data Engineering Manager / Software Development Manager
- 10+ years of engineering experience, most of which is in the data domain
- 5+ years of engineering team management experience
- 10+ years of experience planning, designing, developing and delivering consumer software
- 5+ years of experience managing data engineers, business intelligence engineers and/or data scientists
- Experience partnering with product or program management teams
- Experience designing or architecting (design patterns, reliability and scaling) new and existing systems
- Experience managing multiple concurrent programs, projects and development teams in an Agile environment
- Strong understanding of Data Platform, Data Engineering and Data Governance
- Experience designing and developing large-scale, high-traffic applications
PREFERRED QUALIFICATIONS
- AWS cloud technologies: Redshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, IAM, Airflow
- Prior experience in the Indian banking segment and/or fintech is desired
- Experience with non-relational databases and data stores
- Building and operating highly available, distributed data processing systems for large datasets
- Professional software engineering and best practices for the full software development life cycle
- Designing, developing, and implementing different types of data warehousing layers
- Leading the design, implementation, and successful delivery of large-scale, critical, or complex data solutions
- Building scalable data infrastructure and understanding of distributed systems concepts
- SQL, ETL, and data modelling
- Ensuring the accuracy and availability of data to customers
- Proficient in at least one scripting or programming language for handling large-volume data processing
- Strong presentation and communication skills
For Managers:
- Customer centricity and obsession for the customer
- Ability to manage stakeholders (product owners, business stakeholders, cross-functional teams) and coach agile ways of working
- Ability to structure and organize teams, and streamline communication
- Prior work experience executing large-scale data engineering projects
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
Karnataka
On-site
As an AWS Data Engineer at Quest Global, you will be responsible for designing, developing, and maintaining data pipelines while ensuring data quality and integrity within the MedTech industry. Your key responsibilities will include designing scalable data solutions on the AWS cloud platform, developing data pipelines using Databricks and PySpark, collaborating with cross-functional teams to understand data requirements, optimizing data workflows for improved performance, and ensuring data quality through validation and testing processes.
To be successful in this role, you should have a Bachelor's degree in Computer Science, Engineering, or a related field, along with at least 6 years of experience as a Data Engineer with expertise in AWS, Databricks, PySpark, and S3. You should possess a strong understanding of data architecture, data modeling, and data warehousing concepts, as well as experience with ETL processes, data integration, and data transformation. Excellent problem-solving skills and the ability to work in a fast-paced environment are also essential.
In terms of required skills and experience, you should have experience implementing cloud-based analytics solutions in Databricks (AWS) and S3, scripting experience building data processing pipelines with PySpark, and knowledge of the Data Platform and Cloud (AWS) ecosystems. Working experience with AWS native services such as DynamoDB, Glue, MSK, S3, Athena, CloudWatch, Lambda, and IAM is important, as is expertise in ETL development, analytics application development, and data migration. Exposure to all stages of the SDLC, strong SQL development skills, and proficiency in Python and PySpark development are also desired. Additionally, experience writing unit test cases using PyTest or similar tools would be beneficial (see the sketch below).
If you are a talented AWS Data Engineer looking to make a significant impact in the MedTech industry, we invite you to apply for this exciting opportunity at Quest Global.
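A minimal sketch of the PyTest-style unit testing mentioned above, applied to a trivial PySpark transformation; the function under test and column names are invented for illustration.

```python
import pytest
from pyspark.sql import SparkSession

def dedupe_orders(df):
    """Transformation under test: keep one row per order_id."""
    return df.dropDuplicates(["order_id"])

@pytest.fixture(scope="session")
def spark():
    # A small local session is enough for unit tests.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

def test_dedupe_orders_removes_duplicates(spark):
    df = spark.createDataFrame(
        [(1, "a"), (1, "a"), (2, "b")], ["order_id", "payload"]
    )
    assert dedupe_orders(df).count() == 2
```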
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
Pune, Maharashtra
On-site
Candidates should have 2-7 years of experience, be based in Noida, Gurugram, Indore, Pune, or Bangalore, and either be immediate joiners or currently serving their notice period. You should bring 2-6 years of hands-on experience with Big Data technologies like PySpark (DataFrame and SparkSQL), Hadoop, and Hive. Additionally, you should have good experience with Python and Bash scripts, a solid understanding of SQL and data warehouse concepts, and strong analytical, problem-solving, data analysis, and research skills. You should also demonstrate the ability to think creatively and independently, along with excellent communication, presentation, and interpersonal skills. It would be beneficial if you have hands-on experience with cloud-platform Big Data technologies such as IAM, Glue, EMR, Redshift, S3, and Kinesis. Experience in orchestration with Airflow or any job scheduler, as well as experience migrating workloads from on-premise to cloud and between clouds, would be considered a plus.
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Delhi
On-site
We are looking for a highly motivated and enthusiastic Senior Data Scientist with 5-8 years of experience to join our dynamic team. The ideal candidate should have a strong background in AI/ML analytics and a passion for utilizing data to drive business insights and innovation.
Your main responsibilities will include developing and implementing machine learning models and algorithms, collaborating with project stakeholders to understand requirements and deliverables, analyzing and interpreting complex data sets using statistical and machine learning techniques, staying updated with the latest advancements in AI/ML technologies, and working with cross-functional teams to support various AI/ML initiatives.
To qualify for this role, you should have a Bachelor's degree in Computer Science, Data Science, or a related field, along with a strong understanding of machine learning, deep learning, and Generative AI concepts. Preferred skills for this position include experience in machine learning techniques such as regression, classification, predictive modeling, clustering, and deep learning using the Python stack. Additionally, expertise in cloud infrastructure for AI/ML on AWS (SageMaker, QuickSight, Athena, Glue), building secure data ingestion pipelines for unstructured data, proficiency in Python, TypeScript, Node.js, ReactJS, data visualization tools, deep learning frameworks, version control systems, and Generative AI/LLM-based development is desired.
Good-to-have skills include knowledge and experience in building knowledge graphs in production and an understanding of multi-agent systems and their applications in complex problem-solving scenarios. Pentair is an Equal Opportunity Employer. We value cross-cultural insight and competence for ongoing success, believing that a diverse workforce enhances perspectives and ideas for continuous improvement.
Posted 2 weeks ago
10.0 - 14.0 years
0 Lacs
Karnataka
On-site
Nasdaq Technology is seeking a dedicated Integration Technical Specialist with expertise in building custom integrations in financial systems to join the Bangalore technology center in India. If you are driven by innovation and effectiveness, this opportunity is for you!
Nasdaq is at the forefront of market revolution and technology transformation, constantly striving to develop innovative solutions by embracing new technologies. As a senior technical analyst, your role will involve delivering complex technical systems to both new and existing customers and exploring new technologies in the FinTech industry. The Enterprise Solutions team in Bangalore is looking for a Technical Integration Specialist to drive central initiatives across Nasdaq's software products and services portfolio. Candidates who are passionate about delivering top technology solutions to today's markets are encouraged to apply.
Key Responsibilities:
- Engage in cross-functional work globally to deliver solutions and services for Nasdaq's finance processes
- Interact with internal customers to design solutions, build relationships, and establish trust with key stakeholders
- Collaborate with colleagues locally and in other countries to deliver sophisticated technology solutions
- Receive, analyze, and address technical user inquiries and requests
- Provide technical solutions aligned with business requirements and configure applications accordingly
- Build and maintain integrations with internal systems and third-party vendors
- Conduct end-to-end testing and develop test cases
- Participate in various phases of the Software Development Process
- Ensure the quality of work by following established processes
- Collaborate with multiple IT and business groups
Requirements:
- 10 to 13 years of experience in integration development
- Expertise in web services such as REST and SOAP API programming
- Experience with Informatica Cloud and ETL processes
- Strong understanding of AWS services like S3, Lambda, and Glue
- Bachelor's or Master's degree in computer science or related engineering fields
Desired Skills:
- Proficiency in Workday Integration tools, Report Writer, and Calculated Fields
- Knowledge of finance organization processes including Billing, Accounts Receivable, GL accounting, and Planning & Forecasting
- Experience in multinational, multi-geographic companies
Nasdaq offers a vibrant and entrepreneurial work environment where individuals are encouraged to take initiative and intelligent risks. The company values work-life balance, well-being, and a culture of connectedness and support. Benefits include an annual monetary bonus, opportunities to become a Nasdaq shareholder, health insurance, flexible working schedules, and various employee programs for growth and development. If this opportunity aligns with your expertise and career goals, submit your application in English at your earliest convenience. Nasdaq will provide updates on the selection process within 2-3 weeks.
Posted 2 weeks ago
0.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose - the relentless pursuit of a world that works better for people - we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.
Inviting applications for the role of Senior Principal Consultant - AWS Developer!
We are looking for candidates who have a passion for cloud with knowledge of different cloud environments. Ideal candidates should have technical experience with AWS platform services - IAM roles & policies, Glue, Lambda, EC2, S3, SNS, SQS, EKS, KMS, etc. This key role demands a highly motivated individual with a strong background in Computer Science/Software Engineering. You are meticulous, thorough and possess excellent communication skills to engage with all levels of our stakeholders. A self-starter, you are up to speed with the latest developments in the tech world.
Responsibilities
- Hands-on experience and good skills with AWS platform services - IAM roles & policies, Glue, Lambda, EC2, S3, SNS, SQS, EKS, KMS, etc.
- Must have good working knowledge of Kubernetes and Docker.
- Utilize AWS services such as AWS Glue, Amazon S3, AWS Lambda, and others to optimize performance, reliability, and cost-effectiveness.
- Develop scripts, utilities, and automation tools to facilitate the migration process and ensure compatibility with AWS services.
- Implement best practices for security, scalability, and fault tolerance in AWS-based solutions.
- Experience in AWS cost analysis and a thorough understanding of how to optimize AWS cost.
- Must have good working knowledge of deployment templates like Terraform/CloudFormation.
- Ability to multi-task and manage various project elements simultaneously.
Qualifications we seek in you!
Minimum Qualifications / Skills
- Bachelor's degree with experience in Information Technology.
- Must have experience with AWS platform services.
Preferred Qualifications / Skills
- Very good written and presentation/verbal communication skills, with experience in a customer-interfacing role.
- In-depth requirement-understanding skills with good analytical and problem-solving ability, interpersonal efficiency, and a positive attitude.
- Experience in ML/AI
- Experience in the telecommunication industry
- Experience with cloud providers (e.g., AWS, GCP)
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.
Furthermore, please do note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 2 weeks ago
0.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Ready to build the future with AI?
At Genpact, we don't just keep up with technology - we set the pace. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, innovation-driven environment, love building and deploying cutting-edge AI solutions, and want to push the boundaries of what's possible, this is your moment.
Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.
Inviting applications for the role of Principal Consultant, Data Engineer
In this role, you will be responsible for coding, testing and delivering high-quality deliverables, and should be willing to learn new technologies.
Responsibilities
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources.
- Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs.
- Generate, present, and develop ideas for progressing the data environment, such as common frameworks or common methodologies, which the IM IT division will then use.
- Possess technical curiosity to explore new features in existing tools and technologies, as well as new methodologies, features and tools that can be adopted by the IM IT division.
- Share, instruct, and coach the Investment Technology division on data topics such as best practices, design methodologies, and query optimization.
- Organize, liaise, and work with other development teams to onboard applications and processes onto the new data architecture (cloud technologies, replication, etc.).
- Desire to learn and become the subject matter expert on tools and technologies.
- Act as a liaison to the various development teams and other data teams to market and proliferate the data architecture doctrines, principles and standards.
- Help devise and implement pragmatic data governance principles and methodology.
- Perform detailed data analysis and validation.
- Assist with the preparation, coordination, and execution of User Acceptance Testing.
Qualifications we seek in you!
Minimum Qualifications
- BE/B Tech/MCA
- Excellent written and verbal communication skills
Preferred Qualifications / Skills
- Python, SQL, Spark/PySpark, AWS Glue, AWS Aurora (preferably Postgres), AWS S3, dbt
- Strong relational database design
- Experience with multi-temporal data
- Great Expectations, Jasper Reports, workflow experience (BPMN), .NET
- Excellent verbal and written skills
- Ability to work in a team environment
- Ability to work effectively and efficiently with supervision
- Capable of managing multiple tasks with tight deadlines
Possess strong analytical ability and excellent attention to detail.
Maintain a strong commitment to quality.
Strong Excel skills.
Tools/Technologies used: SQL, SQL Server, ETL tools, Autosys; Snowflake/Cloud/Azure experience a plus.
Why join Genpact?
Lead AI-first transformation - Build and scale AI solutions that redefine industries.
Make an impact - Drive change for global enterprises and solve business challenges that matter.
Accelerate your career - Gain hands-on experience, world-class training, mentorship, and AI certifications to advance your skills.
Grow with the best - Learn from top engineers, data scientists, and AI experts in a dynamic, fast-moving workplace.
Committed to ethical AI - Work in an environment where governance, transparency, and security are at the core of everything we build.
Thrive in a values-driven culture - Our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.
Come join the 140,000+ coders, tech shapers, and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation.
Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
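To gauge the hands-on expectations behind the Python/SQL/PySpark/AWS Glue skill list above, here is a minimal sketch of a Glue-style PySpark ETL job; the bucket paths, table layout, and column names are hypothetical illustrations, not taken from the posting:

```python
# Minimal AWS Glue PySpark job sketch: read raw CSV records from S3,
# de-duplicate and cleanse, and write partitioned Parquet back to S3.
# All paths and column names below are hypothetical examples.
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

spark = glue_context.spark_session

# Hypothetical source: raw trade records landed as CSV.
raw = (spark.read.option("header", "true")
       .csv("s3://example-raw-bucket/trades/"))

# Basic cleansing: drop exact duplicates and rows missing a trade id.
clean = (raw.dropDuplicates()
            .filter(F.col("trade_id").isNotNull())
            .withColumn("trade_date", F.to_date("trade_date", "yyyy-MM-dd")))

# Write partitioned Parquet for downstream analytics.
(clean.write.mode("overwrite")
      .partitionBy("trade_date")
      .parquet("s3://example-curated-bucket/trades/"))

job.commit()
```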
Posted 2 weeks ago
2.0 - 7.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must have skills: Amazon Web Services (AWS)
Good to have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years full time education
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that application development aligns with business objectives, overseeing project timelines, and facilitating communication among stakeholders to drive successful project outcomes. You will also engage in problem-solving activities, providing guidance and support to your team while ensuring adherence to best practices in application development.
Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Continuously evaluate and improve application development processes to increase efficiency.
Professional & Technical Skills:
- Primary: AWS + Python. Secondary: DevOps, Terraform. Good to have: AWS CDK.
- 8-10 years of overall software development experience, with 5 years in AWS and 3 years in Python.
- Hands-on experience with AWS services: EC2, Lambda, SNS, SQS, Glue, Step Functions, CloudWatch, API Gateway, EMR, S3, DynamoDB, RDS, Athena.
- Hands-on experience writing Python code for AWS services such as Glue jobs, Lambda, and AWS CDK.
- Strong technical and debugging skills.
- 2+ years of DevOps experience with Terraform, Git, and CI/CD.
- Experience working in Agile development environments.
- Strong verbal and written communication skills, with the ability to engage directly with clients.
Additional Information:
- The candidate should have a minimum of 5 years of experience in Amazon Web Services (AWS).
- This position is based at our Bengaluru office.
- A 15 years full time education is required.
- Shift Timing: 12:30 PM to 9:30 PM IST [Weekdays]
Qualification: 15 years full time education
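As a rough illustration of the "Python code for AWS services such as Glue jobs, Lambda, and AWS CDK" expectation above, a minimal AWS CDK v2 (Python) sketch defining a single Lambda function; the stack name, function name, and handler body are hypothetical:

```python
# Minimal AWS CDK v2 (Python) sketch: one stack with an inline Lambda.
# Stack/function names and the inline handler body are hypothetical.
from aws_cdk import App, Stack, Duration
from aws_cdk import aws_lambda as _lambda
from constructs import Construct

class ExampleEtlStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # A small Lambda that could later be wired to SQS/SNS/Glue triggers.
        _lambda.Function(
            self, "ExampleHandler",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="index.handler",
            timeout=Duration.seconds(30),
            code=_lambda.Code.from_inline(
                "def handler(event, context):\n"
                "    return {'status': 'ok'}\n"
            ),
        )

app = App()
ExampleEtlStack(app, "ExampleEtlStack")
app.synth()
```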
Posted 2 weeks ago
4.0 - 8.0 years
0 - 0 Lacs
Pune
Hybrid
So, what's the role all about?
Within Actimize, the AI and Analytics Team is developing the next-generation advanced analytical cloud platform that will harness the power of data to provide maximum accuracy for our clients' Financial Crime programs. As part of the PaaS/SaaS development group, you will be responsible for developing this platform for Actimize cloud-based solutions and will work with cutting-edge cloud technologies. We are looking for a Specialist Software Engineer with 8+ years of experience and a strong track record of leading a team as Scrum Master. You will play a key role in building advanced analytics Big Data capabilities that power fraud prevention and Anti-Money Laundering solutions for some of the world's largest financial institutions.
How will you make an impact?
NICE Actimize is the largest and broadest provider of financial crime, risk, and compliance solutions for regional and global financial institutions and has been consistently ranked as number one in the space. At NICE Actimize, we recognize that every employee's contributions are integral to our company's growth and success. To find and acquire the best and brightest talent around the globe, we offer a challenging work environment, competitive compensation and benefits, and rewarding career opportunities. Come share, grow and learn with us - you'll be challenged, you'll have fun, and you'll be part of a fast-growing, highly respected organization. This new SaaS platform will enable our customers (some of the biggest financial institutions around the world) to create solutions on the platform to fight financial crime.
Have you got what it takes?
Design and develop quality, proficient, enterprise-grade solutions in an AWS cloud environment that satisfy business requirements.
Work as part of the development team on the application.
Adhere and contribute to software engineering best practices.
Work with software engineers, architects, and managers in the design process for software products and services.
Work and collaborate in multi-disciplinary Agile teams, adopting the Agile spirit, methodology, and tools.
Participate in reviewing design and code for other team members.
Proactively contribute to process improvement activities.
Test your code using unit/system tests and automation.
Fix bugs and care about enterprise-grade quality.
Assist the support team to resolve production issues as quickly as possible.
Qualifications:
Degree in Computer Science or a related discipline (BE/BTech/MTech/MCA)
7-10 years of hands-on software development experience with Python/ETL
Experience leading a team as Tech Lead or Scrum Master
4+ years of experience developing ETL/Big Data in Spark using PySpark/Scala
Experience working with PaaS/SaaS products
Experience with DevOps practices, CI/CD (Jenkins), and tools like JIRA, Confluence, Terraform
Experience with microservices and Docker containers
Huge advantage: hands-on experience with AWS components like API Gateway, Glue, Lambda, Step Functions, S3, CloudWatch, DynamoDB, SQS, etc.
Excellent spoken/written English
Self-driven with a strong sense of ownership
High-caliber team player, self-starter, with the ability to work independently
Comfortable working with high-volume data in the cloud
Preferred Qualifications:
Experience in finance or banking domains
What's in it for you?
Join an ever-growing, market-disrupting, global company where the teams - comprised of the best of the best - work in a fast-paced, collaborative, and creative environment!
As the market leader, every day at NICE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr!
Enjoy NiCE-FLEX!
At NICE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.
Requisition ID: 7866
Reporting into: Tech Manager
Role Type: Specialist Software Engineer
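For a flavor of the "ETL/Big Data in Spark using PySpark" work the qualifications above call for, here is a small, purely hypothetical PySpark aggregation over transaction data; the column names, sample rows, and threshold are invented for illustration:

```python
# Hypothetical PySpark sketch: flag accounts whose 24-hour transaction
# volume spikes, a toy version of a fraud/AML-style aggregation.
from pyspark.sql import SparkSession, Window, functions as F

spark = SparkSession.builder.appName("txn-spike-demo").getOrCreate()

txns = spark.createDataFrame(
    [("a1", "2024-01-01 10:00:00", 120.0),
     ("a1", "2024-01-01 11:30:00", 9800.0),
     ("a2", "2024-01-01 09:15:00", 50.0)],
    ["account_id", "ts", "amount"],
).withColumn("ts", F.to_timestamp("ts"))

# Rolling 24-hour sum per account, ordered by event time in seconds.
day_window = (Window.partitionBy("account_id")
              .orderBy(F.col("ts").cast("long"))
              .rangeBetween(-86400, 0))

flagged = (txns
           .withColumn("sum_24h", F.sum("amount").over(day_window))
           .filter(F.col("sum_24h") > 5000))  # toy threshold

flagged.show()
```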
Posted 2 weeks ago
5.0 - 10.0 years
10 - 20 Lacs
Bangalore Rural, Bengaluru
Hybrid
Responsibilities
3+ years of hands-on experience leading and performing development in one or more programming languages such as Python, PySpark, etc.
4+ years of hands-on experience in the development and deployment of cloud-native solutions leveraging AWS services: Compute (EC2, Lambda), Storage (S3), Database (RDS, Aurora, Postgres, DynamoDB), Orchestration (Apache Airflow, Step Functions, SNS), ETL/Analytics (Glue, EMR, Athena, Redshift), Infra (CloudFormation, CodePipeline), Data Migration (AWS DataSync, AWS DMS), API Gateway, IAM, etc.
Expertise in handling large data sets and data models: design, data model creation, and development of data pipelines for data ingestion, migration, and transformation
Strong SQL Server skills, including stored procedures
Knowledge of APIs, SSO, and streaming technologies is nice to have
Mandatory Skill Sets
AWS, PySpark, Spark, Glue, Lambda
Years of Experience Required - 5+ years
Education Qualification
B.Tech / M.Tech / MBA / MCA
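Since the responsibilities above pair Apache Airflow orchestration with Glue-based ETL, here is a minimal hypothetical Airflow DAG (2.4+ API) that kicks off a Glue job via boto3; the DAG id, schedule, and Glue job name are placeholders:

```python
# Hypothetical Airflow (2.4+) DAG: trigger an AWS Glue job with boto3.
# DAG id, schedule, and Glue job name are illustrative placeholders.
from datetime import datetime

import boto3
from airflow import DAG
from airflow.operators.python import PythonOperator

def start_glue_job():
    # Fire-and-forget start; production code would poll JobRunState.
    client = boto3.client("glue")
    run = client.start_job_run(JobName="example-trades-etl")
    print("Started Glue run:", run["JobRunId"])

with DAG(
    dag_id="example_trades_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="start_glue_job", python_callable=start_glue_job)
```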
Posted 2 weeks ago
1.0 - 2.0 years
6 - 10 Lacs
Mumbai, Hyderabad, Chennai
Work from Office
Your Role
You would be working on Enterprise Data Management Consolidation (EDMCS), Enterprise Profitability & Cost Management Cloud Services (EPCM), and Oracle Integration Cloud (OIC).
Full life cycle Oracle EPM Cloud implementation.
Creating forms, OIC integrations, and complex business rules.
Understanding dependencies and interrelationships between various components of Oracle EPM Cloud.
Keep abreast of the Oracle EPM roadmap and key functionality to identify opportunities where it will enhance current processes within the entire Financials ecosystem.
Collaborate with FP&A to facilitate the Planning, Forecasting, and Reporting process for the organization.
Create and maintain system documentation, both functional and technical.
Your Profile
Experience in implementation of EDMCS modules.
Proven ability to collaborate with internal clients in an agile manner, leveraging design-thinking approaches.
Experience with Python and AWS Cloud (Lambda, Step Functions, EventBridge, etc.) is preferred.
What you'll love about Capgemini
You can shape your career with us. We offer a range of career paths and internal opportunities within the Capgemini group. You will also get personalized career guidance from our leaders.
You will get comprehensive wellness benefits including health checks, telemedicine, insurance with top-ups, elder care, partner coverage, and new-parent support via flexible work.
You will have the opportunity to learn on one of the industry's largest digital learning platforms, with access to 250,000+ courses and numerous certifications.
About Capgemini
Location - Hyderabad, Chennai, Mumbai, Pune, Bengaluru
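For the preferred Python + AWS (Lambda, Step Functions, EventBridge) skills above, a tiny hypothetical Lambda handler that reacts to a scheduled EventBridge event and starts a Step Functions execution; the state machine ARN and payload shape are placeholders:

```python
# Hypothetical AWS Lambda handler: on an EventBridge schedule, start a
# Step Functions execution. The state machine ARN is a placeholder
# supplied via an environment variable.
import json
import os

import boto3

sfn = boto3.client("stepfunctions")

def handler(event, context):
    # EventBridge scheduled events include a 'time' field, among others.
    execution = sfn.start_execution(
        stateMachineArn=os.environ["STATE_MACHINE_ARN"],
        input=json.dumps({"triggered_at": event.get("time", "")}),
    )
    return {"executionArn": execution["executionArn"]}
```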
Posted 2 weeks ago