3.0 - 7.0 years
10 - 18 Lacs
Pune
Work from Office
Roles and Responsibilities:
Design, develop, and maintain automated testing frameworks using AWS services such as Glue, Lambda, and Step Functions.
Develop data pipelines using Delta Lake and ETL processes to extract insights from large datasets.
Collaborate with cross-functional teams to identify requirements for test cases and create comprehensive test plans.
Ensure high-quality deliverables by executing thorough testing procedures and reporting defects.

Desired Candidate Profile:
3-7 years of experience in QA Automation with a focus on the AWS-native stack (Glue, Lambda).
Strong understanding of SQL concepts and the ability to write complex queries.
Experience with big data technologies such as Hadoop, Hive, or PySpark is an added advantage.
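For illustration only, a minimal PySpark sketch of the kind of automated data-pipeline check a testing framework like the one described here might run; the table path, columns, and thresholds are hypothetical and not taken from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical example: validate a curated table produced by an ETL job.
spark = SparkSession.builder.appName("orders_quality_check").getOrCreate()

df = spark.read.parquet("s3://example-bucket/curated/orders/")  # placeholder path

# Basic checks a test framework might automate: row count, nulls, duplicates.
row_count = df.count()
null_ids = df.filter(F.col("order_id").isNull()).count()
dupes = row_count - df.dropDuplicates(["order_id"]).count()

errors = []
if row_count == 0:
    errors.append("table is empty")
if null_ids > 0:
    errors.append(f"{null_ids} rows with null order_id")
if dupes > 0:
    errors.append(f"{dupes} duplicate order_id values")

# Fail the test run (and surface a defect) if any check fails.
assert not errors, "; ".join(errors)
```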
Posted 1 month ago
5.0 - 10.0 years
15 - 30 Lacs
Hyderabad
Hybrid
Dear Applicants, Astrosoft Technologies is back with an exciting job opportunity. Join our growing team and be a part of it. We are hiring a Sr. AWS Data Engineer | Hyderabad (Hybrid) | 3 tech rounds | early joiners preferred. Apply by emailing Karthik.Jangam@astrosofttech.com with your updated resume and the details below:
Total Experience:
AWS DE Services:
Current Location:
Current Company:
Current CTC:
Expected CTC:
Offer (Y/N):
Notice Period (max 20 days):
Ready to relocate to Hyderabad (Y/N):

Company: AstroSoft Technologies (https://www.astrosofttech.com/). Astrosoft is an award-winning company specializing in Data, Analytics, Cloud, AI/ML, Innovation, and Digital. We have a customer-first mindset, take extreme ownership in delivering solutions and projects for our customers, and have consistently been recognized by our clients as a premium partner to work with. We bring top-tier talent, a robust and structured project execution framework, and significant experience built over the years, with an impeccable record of delivering solutions and projects for our clients. Founded in 2004; headquartered in FL, USA; corporate office in Hyderabad, India.

Benefits Program: H1B sponsorship (depends on project and performance), lunch and dinner (every day), group health insurance coverage, industry-standard leave policy, skill enhancement certifications, hybrid mode.

Job Summary:
Strong experience with and understanding of streaming architecture and development practices using Kafka, Spark, Flink, etc.
Strong AWS development experience using S3, SNS, SQS, MWAA (Airflow), Glue, DMS, and EMR.
Strong knowledge of one or more programming languages: Python/Java/Scala (ideally Python).
Experience using Terraform to build IaC components in AWS.
Strong experience with ETL tools in AWS; ODI experience is a plus.
Strong experience with database platforms: Oracle, AWS Redshift.
Strong experience in SQL tuning, tuning ETL solutions, and physical optimization of databases.
Very familiar with SRE concepts, including evaluating and implementing monitoring and observability tools such as Splunk, Datadog, and CloudWatch, plus other job, log, or dashboard concepts for customer support and application health checks.
Ability to collaborate with business partners to understand and implement their requirements.
Excellent interpersonal skills and the ability to build consensus across teams.
Strong critical thinking and the ability to think outside the box.
Self-motivated and able to perform under pressure.
AWS certified (preferred).

Thanks & Regards, Karthik Kumar (HR TAG Lead - India), Astrosoft Technologies India Private Limited. Contact: +91-8712229084. Email: karthik.jangam@astrosofttech.com
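Purely as an illustrative sketch of the MWAA (Airflow) plus Glue work the summary above mentions, here is a minimal DAG that triggers a Glue job via boto3; the DAG id, job name, schedule, and region are hypothetical, not details from the posting.

```python
from datetime import datetime

import boto3
from airflow import DAG
from airflow.operators.python import PythonOperator


def start_glue_job(**_):
    # Kick off a (hypothetical) Glue ETL job and return its run id.
    glue = boto3.client("glue", region_name="ap-south-1")
    run = glue.start_job_run(JobName="example-orders-etl")
    return run["JobRunId"]


# Minimal MWAA/Airflow DAG that triggers the Glue job once a day.
with DAG(
    dag_id="example_orders_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="run_glue_job", python_callable=start_glue_job)
```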
Posted 1 month ago
4.0 - 9.0 years
0 - 3 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Dear Candidate, Season's Greetings! Interested candidates: please share your updated resume along with a photo, PAN card, and PF member service history.

Role: Python Data Engineer
Work Nature: Contract to hire (3rd-party payroll)
Work Location: Pan India
Total Experience: 4+ years
Immediate joiners only
Email: tuppari.pradeep@firstmeridianglobal.com

Job Description: 4+ years of experience in backend development with Python. Strong experience with AWS services and cloud architecture. Proficiency in developing RESTful APIs and microservices. Experience with database technologies such as SQL, PostgreSQL, and NoSQL databases. Familiarity with containerization and orchestration tools like Docker and Kubernetes. Knowledge of CI/CD pipelines and tools such as Jenkins, GitLab CI, or AWS CodePipeline. Excellent problem-solving skills and attention to detail. Strong communication and collaboration skills.

Interested candidates: please share your updated resume along with a photo, PAN card, and PF member service history to tuppari.pradeep@firstmeridianglobal.com, with the following details:
Full Name (as per government proofs):
Contact Number:
Alternate Contact Number:
Email ID:
Date of Birth:
Father's Name:
Total Experience:
Relevant Experience:
PAN Card Number:
Current CTC:
Expected CTC:
Current Work Location:
Preferred Location:
Open for Relocation (Yes/No):
Current Company Name:
Notice Period:
Mode of Employment (Contract/Permanent):
If Contract, please provide payroll company:

Do you know anyone who could be interested in this profile? Feel free to share this email with them; you might make their day just by forwarding it. Regards, Pradeep, tuppari.pradeep@firstmeridianglobal.com
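A minimal, purely illustrative Flask sketch of the kind of Python REST endpoint this role calls for; the routes and in-memory data model are invented for the example and not taken from the posting.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical in-memory store standing in for a PostgreSQL/NoSQL backend.
ORDERS = {1: {"id": 1, "status": "shipped"}}


@app.get("/orders/<int:order_id>")
def get_order(order_id: int):
    order = ORDERS.get(order_id)
    if order is None:
        return jsonify(error="not found"), 404
    return jsonify(order)


@app.post("/orders")
def create_order():
    payload = request.get_json(force=True)
    new_id = max(ORDERS) + 1
    ORDERS[new_id] = {"id": new_id, **payload}
    return jsonify(ORDERS[new_id]), 201


if __name__ == "__main__":
    app.run(port=8080)
```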
Posted 1 month ago
7.0 - 12.0 years
15 - 30 Lacs
Hyderabad
Hybrid
Job Title: Lead Data Engineer

Job Summary: The Lead Data Engineer will provide technical expertise in the analysis, design, development, rollout, and maintenance of data integration initiatives. This role contributes to implementation methodologies and best practices, and works on project teams to analyse, design, develop, and deploy business intelligence / data integration solutions that support a variety of customer needs. The position oversees a team of Data Integration Consultants at various levels, ensuring their success on projects, goals, trainings, and initiatives through mentoring and coaching. It provides technical expertise in needs identification, data modelling, data movement and transformation mapping (source to target), automation, and testing strategies, translating business needs into technical solutions with adherence to established data guidelines and approaches from a business unit or project perspective, whilst leveraging best-fit technologies (e.g., cloud, Hadoop, NoSQL, etc.) and approaches to address business and environmental challenges. The role works with stakeholders to identify and define self-service analytic solutions, dashboards, actionable enterprise business intelligence reports, and business intelligence best practices, and is responsible for repeatable, lean, and maintainable enterprise BI design across organizations. Effectively partners with the client team. We expect leadership not only in the conventional sense but also within the team: candidates should exhibit leadership qualities such as innovation, critical thinking, optimism/positivity, communication, time management, collaboration, problem-solving, acting independently, knowledge sharing, and approachability.

Responsibilities:
Design, develop, test, and deploy data integration processes (batch or real-time) using tools such as Microsoft SSIS, Azure Data Factory, Databricks, Matillion, Airflow, Sqoop, etc.
Create functional and technical documentation, e.g. ETL architecture documentation, unit testing plans and results, data integration specifications, data testing plans, etc.
Take a consultative approach with business users, asking questions to understand the business need and deriving the data flow and the conceptual, logical, and physical data models based on those needs.
Perform data analysis to validate data models and to confirm the ability to meet business needs.
May serve as project or DI lead, overseeing multiple consultants from various competencies.
Stay current with emerging and changing technologies to best recommend and implement beneficial technologies and approaches for data integration.
Ensure proper execution/creation of methodology, training, templates, resource plans, and engagement review processes.
Coach team members to ensure understanding of projects and tasks, providing effective feedback (critical and positive) and promoting growth opportunities when appropriate.
Coordinate and consult with the project manager, client business staff, client technical staff, and project developers on data architecture best practices and anything else data-related at the project or business unit level.
Architect, design, develop, and set direction for enterprise self-service analytic solutions, business intelligence reports, visualisations, and best-practice standards. Toolsets include, but are not limited to: SQL Server Analysis and Reporting Services, Microsoft Power BI, Tableau, and Qlik.
Work with the report team to identify, design, and implement a reporting user experience that is consistent and intuitive across environments and report methods, defines security, and meets usability and scalability best practices.

Required Qualifications:
10 years of industry implementation experience with data integration tools such as AWS services (Redshift, Athena, Lambda, Glue, S3, ETL, etc.).
5-8 years of management experience required; 5-8 years of consulting experience preferred.
Minimum of 5 years of data architecture, data modelling, or similar experience.
Bachelor's degree or equivalent experience; Master's degree preferred.
Strong data warehousing, OLTP systems, data integration, and SDLC background.
Strong experience in orchestration and working experience with cloud-native / 3rd-party ETL data load orchestration.
Understanding of and experience with major data architecture philosophies (Dimensional, ODS, Data Vault, etc.).
Understanding of on-premises and cloud infrastructure architectures (e.g. Azure, AWS, GCP).
Strong experience with Agile processes (Scrum cadences, roles, deliverables) and working experience in Azure DevOps, JIRA, or similar, with experience in CI/CD using one or more code management platforms.
Strong Databricks experience required, including creating notebooks in PySpark.
Experience using major data modelling tools (examples: ERwin, ER/Studio, PowerDesigner, etc.).
Experience with major database platforms (e.g. SQL Server, Oracle, Azure Data Lake, Hadoop, Azure Synapse/SQL Data Warehouse, Snowflake, Redshift, etc.).
Strong experience in orchestration and working experience with Data Factory, HDInsight, Data Pipeline, Cloud Composer, or similar.
Understanding of modern data warehouse capabilities and technologies such as real-time, cloud, and Big Data.
3-5 years of development experience in decision support / business intelligence environments utilizing tools such as SQL Server Analysis and Reporting Services, Microsoft Power BI, Tableau, Looker, etc.

Preferred Skills & Experience:
Knowledge of and working experience with data integration processes, such as data warehousing, EAI, etc.
Experience in providing estimates for data integration projects, including testing, documentation, and implementation.
Ability to analyse business requirements as they relate to data movement and transformation processes, and to research, evaluate, and recommend alternative solutions.
Ability to provide technical direction to other team members, including contractors and employees.
Ability to contribute to conceptual data modelling sessions to accurately define business processes independently of data structures, and then combine the two together.
Proven experience leading team members, directly or indirectly, in completing high-quality major deliverables with superior results.
Demonstrated ability to serve as a trusted advisor who builds influence with client management beyond simply EDM.
Can create documentation and presentations such that they stand on their own.
Can advise sales on the evaluation of data integration efforts for new or existing client work.
Can contribute to internal/external data integration proofs of concept.
Demonstrates the ability to create new and innovative solutions to problems that have not previously been encountered.
Ability to work independently on projects as well as collaborate effectively across teams.
Must excel in a fast-paced, agile environment where critical thinking and strong problem-solving skills are required for success.
Strong team building, interpersonal, analytical, and problem identification and resolution skills.
Experience working with multi-level business communities.
Can effectively utilise SQL and/or available BI tools to validate/elaborate business rules.
Demonstrates an understanding of EDM architectures and applies this knowledge in collaborating with the team to design effective solutions to business problems/issues.
Effectively influences and, at times, oversees business and data analysis activities to ensure sufficient understanding and quality of data.
Demonstrates a complete understanding of and utilises DSC methodology documents to efficiently complete assigned roles and associated tasks.
Deals effectively with all team members and builds strong working relationships/rapport with them.
Understands and leverages a multi-layer semantic model to ensure scalability, durability, and supportability of the analytic solution.
Understands modern data warehouse concepts (real-time, cloud, Big Data) and how to enable such capabilities from a reporting and analytics standpoint.
Posted 1 month ago
5.0 - 10.0 years
22 - 25 Lacs
Hyderabad
Work from Office
Role & Responsibilities: We are seeking a talented data modeler to assist with the design and implementation of company databases. Mandatory skills: database management, SQL, NoSQL, data modelling, ERwin, IBM Data Architect, ETL, Talend, Apache, Microsoft SSIS, big data understanding (Hadoop, Athena, Redshift); cloud computing on AWS is a must. As a data modeler, you will work closely with data architects and data analysts to implement data modelling solutions that streamline and support enterprise information management. To succeed as a data modeler, you should have in-depth knowledge of data warehousing as well as excellent communication skills. Ultimately, a top-notch data modeler should be able to design models that reduce data redundancy, streamline data movement, and improve enterprise information management.
Collaborate with business analysts and stakeholders to understand business processes and requirements, translating them into data modelling solutions.
Design and develop logical and physical data models that effectively capture the granularity of data necessary for analytical and reporting purposes.
Migrate and optimize existing data models from traditional on-premises data stores to Azure/Databricks cloud environments, ensuring scalability and performance.
Establish data modelling standards and best practices to maintain the integrity and consistency of the data architecture.
Work closely with data engineers and BI developers to ensure that the data models support the needs of analytical and operational reporting.
Conduct data profiling and analysis to understand data sources, relationships, and quality, informing the data modelling process.
Continuously evaluate and refine data models to accommodate evolving business needs and to leverage new data modelling techniques and cloud capabilities.
Document data models, including entity-relationship diagrams, data dictionaries, and metadata, to provide clear guidance for development and maintenance.
Provide expertise in data modelling and data architecture to support the development of data governance policies and procedures.

Preferred Candidate Profile: We are a startup and expect each team member to wear multiple hats, take initiative, and spot and solve problems. Candidates with a healthcare reimbursement background are preferred. Bachelor's degree in computer science, information technology, or a similar field. 5 years of hands-on experience with physical and relational data modeling. Expert knowledge of metadata management and related tools. Knowledge of mathematical foundations and statistical analysis. Strong interpersonal skills. Experience with team management. Excellent communication and presentation skills. Advanced troubleshooting skills.

Knowledge, Skills, Behaviors: Minimum of 5 years of experience in data modelling, with a strong background in both traditional RDBMS and modern cloud-based data platforms. Proficiency in SQL and experience with data modelling tools (e.g., ER/Studio, ERwin, PowerDesigner). Familiarity with Azure cloud services, Databricks, and other big data technologies. Understanding of data warehousing concepts, including dimensional modelling, star schemas, and snowflake schemas. Ability to translate complex business requirements into effective data models that support analytical and reporting functions. Strong analytical skills and attention to detail.
Excellent communication and collaboration abilities, with the capacity to engage with both technical and non-technical stakeholders.
Posted 1 month ago
4.0 - 9.0 years
0 - 3 Lacs
Hyderabad
Hybrid
Urgent hiring for an AWS Data Engineer for client DELOITTE (CTH position).

Hi, Season's Greetings!
Role: AWS Data Engineer
Work Nature: Contract to hire (3rd-party payroll)
Work Location: Hyderabad
Total Experience: 4+ years
Immediate joiners only

Job Description: Hands-on experience with AWS services including S3, Lambda, Glue, API Gateway, and SQL. Strong skills in data engineering on AWS, with proficiency in Python, PySpark, and SQL. Experience with batch job scheduling and managing data dependencies. Knowledge of data processing tools like Spark and Airflow. Automate repetitive tasks and build reusable frameworks to improve efficiency. Provide Run/DevOps support and manage the ongoing operation of data services.

Please share your updated resume along with the details below:
Full Name (as per government proofs):
Contact Number:
Alternate Contact Number:
Email ID:
Date of Birth:
Father's Name:
Total Experience:
Relevant Experience:
PAN Card Number:
Current CTC:
Expected CTC:
Current Work Location:
Preferred Location:
Open for Relocation (Yes/No):
Current Company Name:
Notice Period:
Mode of Employment (Contract/Permanent):
If Contract, please provide payroll company:

Do you know anyone who could be interested in this profile? Feel free to share this email with them; you might make their day just by forwarding it. Mounika.t@affluentgs.com | 7661922227
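A minimal, illustrative AWS Lambda handler in Python for the kind of S3-triggered processing this posting describes; the bucket, key handling, and Glue job name are hypothetical and not from the posting.

```python
import json
import urllib.parse

import boto3

glue = boto3.client("glue")


def lambda_handler(event, context):
    """Triggered by an S3 put event; starts a (hypothetical) Glue job per new file."""
    runs = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # Pass the new object to the downstream Glue job as arguments.
        run = glue.start_job_run(
            JobName="example-curation-job",
            Arguments={"--source_path": f"s3://{bucket}/{key}"},
        )
        runs.append(run["JobRunId"])
    return {"statusCode": 200, "body": json.dumps({"job_runs": runs})}
```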
Posted 1 month ago
2.0 - 6.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Flexing It is a freelance consulting marketplace that connects freelancers and independent consultants with organisations seeking independent talent. Flexing It has partnered with our client, a global leader in energy management and automation, which is seeking a Data Engineer to prepare data and make it available in an efficient and optimized format for its different data consumers, ranging from BI and analytics to data science applications. The work uses current technologies, in particular Apache Spark, Lambda and Step Functions, the Glue Data Catalog, and Redshift in an AWS environment.

Key Responsibilities:
Design and develop new data ingestion patterns into the IntelDS Raw and/or Unified data layers based on the requirements and needs of new data sources or new data objects; working in ingestion patterns allows the data pipelines to be automated.
Participate in and apply DevSecOps practices by automating the integration and delivery of data pipelines in a cloud environment; this can include the design and implementation of end-to-end data integration tests and/or CI/CD pipelines.
Analyze existing data models, and identify and implement performance optimizations for data ingestion and data consumption; the objective is to accelerate data availability within the platform and to consumer applications.
Support client applications in connecting to and consuming data from the platform, and ensure they follow our guidelines and best practices.
Participate in monitoring the platform and debugging detected issues and bugs.

Skills required:
Minimum of 3 years of prior experience as a data engineer, with proven experience in Big Data and data lakes in a cloud environment.
Bachelor's or master's degree in computer science or applied mathematics (or equivalent).
Proven experience working with data pipelines / ETL / BI, regardless of the technology.
Proven experience working with AWS, including at least three of: Redshift, S3, EMR, CloudFormation, DynamoDB, RDS, Lambda.
Big Data technologies and distributed systems: one of Spark, Presto, or Hive.
Python: scripting and object-oriented programming.
Fluency in SQL for data warehousing (Redshift in particular is a plus).
Good understanding of data warehousing and data modelling concepts.
Familiarity with Git, Linux, and CI/CD pipelines is a plus.
Strong systems/process orientation with demonstrated analytical thinking, organization skills, and problem-solving skills.
Ability to self-manage, prioritize, and execute tasks in a demanding environment.
Strong consultancy orientation and experience, with the ability to form collaborative, productive working relationships across diverse teams and cultures, is a must.
Willingness and ability to train and teach others.
Ability to facilitate meetings and follow up on resulting action items.
Posted 1 month ago
4.0 - 8.0 years
9 - 13 Lacs
Bengaluru
Work from Office
We're Hiring: Delivery Solution Architect – Data Analytics (AWS)
Location: Remote
Level: Mid to Senior (3–5+ years of experience)

Are you passionate about turning complex data challenges into elegant, scalable solutions on the AWS cloud? We're looking for a Delivery Solution Architect – Data Analytics to join our growing team and take the lead in architecting and delivering next-gen data platforms that drive real business impact.

About the Role: As a Delivery Solution Architect, you will play a pivotal role in designing and implementing end-to-end data analytics solutions on AWS. You'll collaborate with cross-functional teams and lead a group of 5–10 consultants and engineers to bring modern data architectures to life, powering business intelligence, machine learning, and operational insights.

Key Responsibilities:
Lead the design and delivery of data analytics solutions using AWS services (Redshift, EMR, Glue, Kinesis, etc.).
Collaborate with project teams, clients, and sales stakeholders to craft technical proposals and solution blueprints.
Design scalable, secure, high-performance data models and ETL pipelines.
Optimize data platforms for cost-efficiency, query performance, and concurrency.
Ensure data governance, security, and compliance with best practices.
Troubleshoot technical issues and provide mentorship to engineering teams.
Stay ahead of industry trends and bring innovative solutions to the table.
Report to practice leads, contribute to documentation, and support deployment activities.

Qualifications:
3–5 years of experience as a Solution Architect or Technical Lead in data analytics delivery.
Hands-on expertise with AWS data tools (Redshift, EMR, Glue, Kinesis, etc.).
Proficiency in SQL and Python; strong data modeling and ETL experience.
Knowledge of Microsoft Azure Data Analytics tools is a plus.
Experience working in Agile teams and using version control (e.g., Git).
Strong communication skills and the ability to collaborate with technical and non-technical stakeholders.
AWS Certifications (Solutions Architect and Data Analytics – Specialty) are required.

Preferred Skills:
Team leadership in project delivery environments.
Familiarity with data governance, data quality, and metadata management.
Documentation, proposal writing, and client engagement skills.

What's In It For You?
Opportunity to work with advanced AWS data technologies.
Be part of a collaborative, innovation-focused team.
Shape data strategies that directly impact enterprise decision-making.
Career growth in a cloud-first, analytics-driven environment.

Ready to architect the future of data? Apply now or reach out to learn more!
#AWSJobs #DataAnalytics #SolutionArchitect #Hiring #AWSCareers #CloudComputing #DataEngineering #Redshift #Glue #AzureData #TechJobs
Posted 1 month ago
8.0 - 13.0 years
25 - 30 Lacs
Mangaluru
Work from Office
Job Summary: As a DevOps Engineer specializing in data, you will be responsible for implementing and managing our cloud-based data infrastructure using AWS and Snowflake. You will collaborate with data engineers, data scientists, and other stakeholders to design, deploy, and maintain a robust data ecosystem that supports our analytics and business intelligence initiatives. Your expertise in modern data tech stacks, MLOps methodologies, automation, and information security will be crucial in enhancing our data pipelines and ensuring data integrity and availability.

Technical Skills:
3+ years of experience in the field of data warehousing and BI.
Experience working with the Snowflake database.
In-depth knowledge of data warehouse concepts.
Experience in designing, developing, testing, and implementing ETL solutions using enterprise ETL tools.
Experience with large or partitioned relational databases (Aurora / MySQL / DB2).
Very strong SQL and data analysis capabilities.
Familiarity with billing and payment data is a plus.
Agile development (Scrum) experience.
Other preferred experience includes working with DevOps practices, SaaS, IaaS, code management (CodeCommit, Git), deployment tools (CodeBuild, CodeDeploy, Jenkins, shell scripting), and continuous delivery.
Primary AWS development skills include S3, IAM, Lambda, RDS, Kinesis, API Gateway, Redshift, EMR, Glue, and CloudFormation.

Responsibilities:
Be a key contributor to the design and development of a scalable and cost-effective cloud-based data platform based on a data lake design.
Develop data platform components in a cloud environment to ingest data and events from cloud and on-premises environments as well as third parties.
Build automated pipelines and data services to validate, catalog, aggregate, and transform ingested data.
Build automated data delivery pipelines and services to integrate data from the data lake into internal and external consuming applications and services.
Build and deliver cloud-based deployment and monitoring capabilities consistent with DevOps models.
Keep knowledge and skills current with the latest cloud services, features, and best practices.

Who We Are: unifyCX is an emerging global business process outsourcing company with a strong presence in the U.S., Colombia, the Dominican Republic, India, Jamaica, Honduras, and the Philippines. We provide personalized contact centers, business processing, and technology outsourcing solutions to clients worldwide. In nearly two decades, unifyCX has grown from a small team to a global organization with staff members all over the world dedicated to supporting our international clientele. At unifyCX, we leverage advanced AI technologies to elevate the customer experience (CX) and drive operational efficiency for our clients. Our commitment to innovation positions us as a trusted partner, enabling businesses across industries to meet the evolving demands of a global market with agility and precision. unifyCX is a certified minority-owned business and an EOE employer that welcomes diversity.
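As an illustration of the Snowflake-plus-S3 ingestion work described above, a minimal sketch using the snowflake-connector-python library; the account, stage, and table names are placeholders, not details from the posting.

```python
import snowflake.connector

# Placeholder credentials; in practice these would come from a secrets manager.
conn = snowflake.connector.connect(
    account="example_account",
    user="etl_user",
    password="***",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)

try:
    cur = conn.cursor()
    # Load newly landed files from a (hypothetical) external S3 stage into a raw table.
    cur.execute(
        """
        COPY INTO RAW.PAYMENTS
        FROM @RAW.S3_PAYMENTS_STAGE
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
        """
    )
    print(cur.fetchall())  # per-file load results
finally:
    conn.close()
```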
Posted 1 month ago
3.0 - 7.0 years
8 - 12 Lacs
Mangaluru
Work from Office
Job Summary: As a DevOps Engineer specializing in data, you will be responsible for implementing and managing our cloud-based data infrastructure using AWS and Snowflake. You will collaborate with data engineers, data scientists, and other stakeholders to design, deploy, and maintain a robust data ecosystem that supports our analytics and business intelligence initiatives. Your expertise in modern data tech stacks, MLOps methodologies, automation, and information security will be crucial in enhancing our data pipelines and ensuring data integrity and availability.

Technical Skills:
3+ years of experience in the field of data warehousing and BI.
Experience working with the Snowflake database.
In-depth knowledge of data warehouse concepts.
Experience in designing, developing, testing, and implementing ETL solutions using enterprise ETL tools.
Experience with large or partitioned relational databases (Aurora / MySQL / DB2).
Very strong SQL and data analysis capabilities.
Familiarity with billing and payment data is a plus.
Agile development (Scrum) experience.
Other preferred experience includes working with DevOps practices, SaaS, IaaS, code management (CodeCommit, Git), deployment tools (CodeBuild, CodeDeploy, Jenkins, shell scripting), and continuous delivery.
Primary AWS development skills include S3, IAM, Lambda, RDS, Kinesis, API Gateway, Redshift, EMR, Glue, and CloudFormation.

Responsibilities:
Be a key contributor to the design and development of a scalable and cost-effective cloud-based data platform based on a data lake design.
Develop data platform components in a cloud environment to ingest data and events from cloud and on-premises environments as well as third parties.
Build automated pipelines and data services to validate, catalog, aggregate, and transform ingested data.
Build automated data delivery pipelines and services to integrate data from the data lake into internal and external consuming applications and services.
Build and deliver cloud-based deployment and monitoring capabilities consistent with DevOps models.
Keep knowledge and skills current with the latest cloud services, features, and best practices.

Who We Are: unifyCX is an emerging global business process outsourcing company with a strong presence in the U.S., Colombia, the Dominican Republic, India, Jamaica, Honduras, and the Philippines. We provide personalized contact centers, business processing, and technology outsourcing solutions to clients worldwide. In nearly two decades, unifyCX has grown from a small team to a global organization with staff members all over the world dedicated to supporting our international clientele. At unifyCX, we leverage advanced AI technologies to elevate the customer experience (CX) and drive operational efficiency for our clients. Our commitment to innovation positions us as a trusted partner, enabling businesses across industries to meet the evolving demands of a global market with agility and precision. unifyCX is a certified minority-owned business and an EOE employer that welcomes diversity.
Posted 1 month ago
8.0 - 13.0 years
25 - 30 Lacs
Bengaluru
Work from Office
About Us: At ANZ, we're applying new ways technology and data can be harnessed as we work towards a common goal: to improve the financial wellbeing and sustainability of our millions of customers. Our community of over 5,000 engineers is key to making this happen, because technology underpins every part of our business, from delivering tools, apps, and services for our customers to building a bank for the future.

About the Role: As a Software Engineer in our team, you'll play a key role in helping to deliver innovative and stable technology solutions for the Markets business. This role will work with the Product Owner, business representatives, and your fellow team members to design, develop, implement, and support the Institutional Term Deposits application primarily, whilst giving you the opportunity to learn other systems in our space. Banking is changing and we're changing with it, giving our people great opportunities to try new things, learn, and grow. Whatever your role at ANZ, you'll be building your future, while helping to build ours.

Role Location: India, Bangalore
Role Type: Permanent

What will your day look like? As a Software Engineer, you will:
Perform analysis and translation of business requirements into functional/non-functional system specifications/stories.
Perform impact analysis of changes, estimating, performing peer reviews, and communicating with business customers and stakeholders.
Play a key role in supporting solutions through the consulting, analysis, and design phases of the SDLC for the purposes of developing new and/or enhancing existing applications.
Lead and mentor less experienced engineers and drive improvement in the engineering competency, as well as managing process improvement.
Carry out second-level application support and manage all supported applications to meet the SLA.

What will you bring? To grow and be successful in this role, you will ideally bring the following:
Technical skills including C#, ASP.NET MVC, ASP.NET WebAPI, .NET Core, and web front-end (HTML5, CSS, JavaScript, jQuery), with SQL using the Microsoft SQL database.
Experience with DevOps, CI/CD setup, and Git source control.
Proven analytical skills and data-driven decision making.
AWS cloud understanding (EC2, RDS, Glue ETL).
Ability to quickly gather information on the progress of work and communicate this to key stakeholders.
Ability to identify, monitor, and manage risks, issues, and dependencies, agreeing appropriate risk responses.

Good to have:
C++, Python, PowerShell.
Infrastructure as code using Terraform.
Single sign-on (SSO) technologies, e.g. SAML, OpenID Connect (OIDC), OAuth 2.0, JWT.
One or more of Reactjs, Angularjs, Vue, or another mainstream JavaScript framework.
An understanding of the ITD system and/or the Deposits business would be advantageous.

You're not expected to have 100% of these skills. At ANZ, a growth mindset is at the heart of our culture, so if you have most of these things in your toolbox, we'd love to hear from you.

Job Posting End Date: 16/06/2025, 11.59pm (Melbourne, Australia).
Posted 1 month ago
6.0 - 8.0 years
12 - 15 Lacs
Pune, Chennai
Work from Office
Required Skills: Minimum 6 years of experience in data engineering / backend data processing. Strong hands-on experience with Python for data processing. Expertise in Apache Spark (PySpark preferred). Advanced proficiency in SQL.
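For illustration, a small self-contained PySpark example of the Python, Spark, and SQL combination this posting asks for; the data and column names are invented for the example.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pyspark_sql_example").getOrCreate()

# Toy data standing in for a real source table.
events = spark.createDataFrame(
    [("u1", "click", 3), ("u1", "view", 10), ("u2", "click", 1)],
    ["user_id", "event_type", "count"],
)

# Register the DataFrame so it can be queried with plain SQL.
events.createOrReplaceTempView("events")

summary = spark.sql(
    """
    SELECT user_id, SUM(count) AS total_events
    FROM events
    GROUP BY user_id
    ORDER BY total_events DESC
    """
)
summary.show()
```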
Posted 1 month ago
3.0 - 4.0 years
20 - 25 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
3-4 years of hands-on experience with AWS services, ideally SaaS in the cloud. Experience developing solutions with a coding/scripting language (e.g., Python, Node.js); Python experience is a must. Experience in creating and configuring AWS resources such as API Gateway, CloudWatch, CloudFormation, EC2, Lambda, Amazon Connect, SNS, Athena, Glue, VPC, etc. Sourcing and screening US profiles. Location: Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad, Remote.
Posted 1 month ago
5.0 - 8.0 years
20 - 25 Lacs
Pune
Hybrid
Software Engineer | Baner, Pune, Maharashtra
Department: Software & Automation
Employee Type: Permanent
Experience Range: 5 - 8 years
Qualification: Bachelor's or master's degree in computer science, IT, or a related field.

Roles & Responsibilities (Technical Role):
Architect and build scalable data pipelines using AWS and Databricks.
Integrate data from sensors (cameras, lidars, radars).
Deliver proofs of concept and support system improvements.
Ensure data quality and scalable design in solutions.
Strong Python, Databricks (SQL, PySpark, Workflows), and AWS skills.
Solid leadership and mentoring ability.
Agile development experience.

Additional Skills (Good to Have):
AWS/Databricks certifications.
Experience with Infrastructure as Code (Terraform/CDK).
Exposure to machine learning data workflows.

Software Skills: Python, Databricks (SQL, PySpark, Workflows), AWS (S3, EC2, Glue), Terraform/CDK (good to have).
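A small, illustrative PySpark sketch of the sensor-data ingestion described above, as it might look in a Databricks notebook; the paths, schema, and table names are hypothetical assumptions, not details from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sensor_ingest").getOrCreate()

# Hypothetical landing zone of camera/lidar/radar frame metadata in S3.
raw = spark.read.json("s3://example-bucket/landing/sensor_frames/")

curated = (
    raw.withColumn("event_ts", (F.col("timestamp_ms") / 1000).cast("timestamp"))
    .withColumn("sensor_type", F.lower("sensor_type"))
    .filter(F.col("sensor_type").isin("camera", "lidar", "radar"))
    .dropDuplicates(["frame_id"])
)

# Write to a Delta table partitioned by sensor type for downstream consumers.
(
    curated.write.format("delta")
    .mode("append")
    .partitionBy("sensor_type")
    .saveAsTable("curated.sensor_frames")
)
```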
Posted 1 month ago
5.0 - 10.0 years
10 - 20 Lacs
Chennai
Work from Office
Notice period: Immediate to 15 days
Profile source: Anywhere in India
Timings: 1:00 pm - 10:00 pm
Work Mode: WFO (Mon-Fri)

Job Summary: We are looking for an experienced and highly skilled Senior Data Engineer to lead the design and development of our data infrastructure and pipelines. As a key member of the Data & Analytics team, you will play a pivotal role in scaling our data ecosystem, driving data engineering best practices, and mentoring junior engineers. This role is ideal for someone who thrives on solving complex data challenges and building systems that power business intelligence, analytics, and advanced data products.

Key Responsibilities:
Design and build robust, scalable, and secure data pipelines.
Lead the complete lifecycle of ETL/ELT processes, encompassing data intake, transformation, and storage, including SCD Type 2 handling.
Collaborate with data scientists, analysts, backend and product teams to define data requirements and deliver impactful data solutions.
Maintain and oversee the data infrastructure, including cloud storage, processing frameworks, and orchestration tools.
Build logical and physical data models using any data modeling tool.
Champion data governance practices, focusing on data quality, lineage tracking, and cataloguing.
Guarantee adherence of data systems to privacy regulations and organizational policies.
Guide junior engineers, conduct code reviews, and foster knowledge sharing and technical best practices within the team.

Required Skills & Qualifications:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Minimum of 5 years of practical experience in a data engineering or comparable role.
Demonstrated expertise in SQL and Python (or similar languages such as Scala/Java).
Extensive experience with data pipeline orchestration tools (e.g., Airflow, dbt).
Proficiency in cloud data platforms, including AWS (Redshift, S3, Glue), GCP (BigQuery, Dataflow), or Azure (Data Factory, Synapse).
Familiarity with big data technologies (e.g., Spark, Kafka, Hive) and related data tools.
Solid grasp of data warehousing principles, data modeling techniques, and performance tuning, with data modeling tools such as Erwin Data Modeler or MySQL Workbench.
Exceptional problem-solving abilities coupled with a proactive and team-oriented approach.
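Since the posting calls out SCD Type 2 in the ETL/ELT lifecycle, here is a heavily simplified, illustrative PySpark sketch of one common way a Type 2 update is expressed; the dimension schema (customer_id, address, start_date, end_date, is_current) and paths are hypothetical, not taken from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("scd2_example").getOrCreate()

# Assumed dimension schema: customer_id, address, start_date, end_date, is_current.
dim = spark.read.parquet("s3://example-bucket/dim_customer/")
updates = spark.read.parquet("s3://example-bucket/staging/customer_updates/")

# Incoming rows whose tracked attribute differs from the current dimension row.
changed = (
    updates.alias("u")
    .join(
        dim.filter(F.col("is_current")).alias("d"),
        F.col("u.customer_id") == F.col("d.customer_id"),
    )
    .filter(F.col("u.address") != F.col("d.address"))
    .select("u.customer_id", "u.address")
)
changed_ids = changed.select("customer_id")

# Expire the currently active versions of changed customers.
expired = (
    dim.join(changed_ids, "customer_id", "left_semi")
    .filter(F.col("is_current"))
    .withColumn("is_current", F.lit(False))
    .withColumn("end_date", F.current_date())
)

# Keep older history of changed customers and all rows of untouched customers.
history_of_changed = (
    dim.join(changed_ids, "customer_id", "left_semi").filter(~F.col("is_current"))
)
untouched = dim.join(changed_ids, "customer_id", "left_anti")

# Insert the new versions as current rows.
new_rows = (
    changed.withColumn("is_current", F.lit(True))
    .withColumn("start_date", F.current_date())
    .withColumn("end_date", F.lit(None).cast("date"))
)

result = (
    untouched.unionByName(history_of_changed)
    .unionByName(expired)
    .unionByName(new_rows)
)
result.write.mode("overwrite").parquet("s3://example-bucket/dim_customer_v2/")
```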
Posted 1 month ago
5.0 - 8.0 years
5 - 15 Lacs
Chennai
Work from Office
Notice period: Immediate to 15 days
Profile source: Tamil Nadu
Timings: 1:00 pm - 10:00 pm (IST)
Work Mode: WFO (Mon-Fri)

About the Role: We are looking for an experienced and highly skilled Senior Data Engineer to lead the design and development of our data infrastructure and pipelines. As a key member of the Data & Analytics team, you will play a pivotal role in scaling our data ecosystem, driving data engineering best practices, and mentoring junior engineers. This role is ideal for someone who thrives on solving complex data challenges and building systems that power business intelligence, analytics, and advanced data products.

Key Responsibilities:
Design and build robust, scalable, and secure data pipelines.
Lead the complete lifecycle of ETL/ELT processes, encompassing data intake, transformation, and storage.
Collaborate with data scientists, analysts, and product teams to define data requirements and deliver impactful data solutions.
Maintain and oversee the data infrastructure, including cloud storage, processing frameworks, and orchestration tools.
Champion data governance practices, focusing on data quality, lineage tracking, and cataloguing.
Guarantee adherence of data systems to privacy regulations and organizational policies.
Guide junior engineers, conduct code reviews, and foster knowledge sharing and technical best practices within the team.

Required Qualifications & Skills:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Minimum of 5 years of practical experience in a data engineering or comparable role.
Demonstrated expertise in SQL and Python (or similar languages such as Scala/Java).
Extensive experience with data pipeline orchestration tools (e.g., Airflow, dbt, Prefect).
Proficiency in cloud data platforms, including AWS (Redshift, S3, Glue), GCP (BigQuery, Dataflow), or Azure (Data Factory, Synapse).
Familiarity with big data technologies (e.g., Spark, Kafka, Hive) and contemporary data stack tools.
Solid grasp of data warehousing principles, data modeling techniques, and performance tuning.
Exceptional problem-solving abilities coupled with a proactive and team-oriented approach.
Posted 1 month ago
10.0 - 15.0 years
10 - 15 Lacs
Bengaluru, Karnataka, India
On-site
Work closely with the Product Owners and stakeholders to design the technical architecture for the data platform to meet the requirements of the proposed solution. Work with the leadership to set the standards for software engineering practices within the machine learning engineering team and support across other disciplines. Play an active role in leading team meetings and workshops with clients. Choose and use the right analytical libraries, programming languages, and frameworks for each task. Help the Data Engineering team produce high-quality code that allows us to put solutions into production. Create and own the technical product backlogs for products and help the team close the backlogs on time. Refactor code into reusable libraries, APIs, and tools. Help us shape the next generation of our products.

What We're Looking For:
Total experience of 10+ years in the data management area, with experience in the implementation of modern data ecosystems on AWS/cloud platforms.
Strong experience with AWS ETL/file movement tools (Glue, Athena, Lambda, Kinesis, and the rest of the AWS integration stack).
Strong experience with Agile development and SQL.
Strong experience with two or three AWS database technologies (Redshift, Aurora, RDS, S3, and other AWS data services), covering security, policies, and access management.
Strong programming experience with Python and Spark.
Quick to learn new technologies.
Experience with Apache Airflow and other automation stacks.
Excellent data modeling skills.
Excellent oral and written communication skills.
A high level of intellectual curiosity, external perspective, and interest in innovation.
Strong analytical, problem-solving, and investigative skills.
Experience in applying quality and compliance requirements.
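As a small, illustrative example of the AWS integration work listed above, here is a sketch that runs an Athena query from Python with boto3; the database, table, region, and output location are placeholders, not details from the posting.

```python
import time

import boto3

athena = boto3.client("athena", region_name="ap-south-1")

# Submit a (hypothetical) aggregation query against a Glue-catalogued table.
query = athena.start_query_execution(
    QueryString="SELECT region, COUNT(*) AS orders FROM sales.orders GROUP BY region",
    QueryExecutionContext={"Database": "sales"},
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)
query_id = query["QueryExecutionId"]

# Poll until the query finishes, then fetch the result rows.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])
```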
Posted 1 month ago
5.0 - 10.0 years
15 - 27 Lacs
Mumbai, Hyderabad, Bengaluru
Hybrid
Primary Skills: AWS Glue, Python, PySpark.
Secondary Skills: RDS, Redshift, Snowflake.
Snowflake: understanding of its architecture, data ingestion, and query optimization (existing data warehouse).
AWS Services: extensive experience with AWS Glue, AWS Lambda, Amazon EMR, Amazon S3, and Apache Airflow for building data pipelines.
Python & SQL: strong programming skills for data transformation and querying.
Data Warehousing: experience in managing existing Snowflake data warehouses and optimizing performance.
Posted 1 month ago
3.0 - 8.0 years
5 - 13 Lacs
Chennai, Bengaluru
Work from Office
Job Description
Role: Snowflake DevOps Engineer
Visit our website bmwtechworks.in to know more. Follow us on LinkedIn | Instagram | Facebook | X for exciting updates.
Location: Bangalore / Chennai
Experience: 2 to 8 years
Number of openings: 4

What awaits you / Job Profile:
Supporting Snowflake's BMW-side use case customers.
Consulting for use case onboarding.
Monitoring of data synchronization jobs (reconciliation between Cloud Data Hub and Snowflake).
Cost monitoring reports for use cases.
Further technical implementations, such as M2M authentication for applications and data traffic over VPC.
Integrate Snowflake into the use case application process within Data Portal (automated use case setup triggered by Data Portal).
Technical documentation.
Executing standard service requests (service user lifecycle, etc.).
Compiling user and operational manuals.
Organizing and documenting knowledge regarding incidents/customer cases in a knowledge base.
Enhancing and editing process documentation.
Ability and willingness to coach and give training to fellow colleagues and users when needed.
Ability to resolve 2nd-level incidents within the Data Analytics Platform (may entail basic code changes).
Close collaboration with 3rd-level support/development and SaaS vendor teams.
Implementation of new development changes, assisting with and contributing to development needs.

What should you bring along:
Strong understanding of and experience with Python; AWS IAM, S3, KMS, Glue, CloudWatch; GitHub.
Understanding of APIs.
Understanding of software development and a background in business intelligence.
SQL (queries, DDL, materialized views, tasks, procedures, optimization).
Any Data Portal or Cloud Data Hub experience.
A technical background in operating and supporting IT platforms.
IT service management (according to ITIL) and 2nd-level support.
Strong understanding of problem, incident, and change processes.
High customer orientation.
Experience working in a highly complex environment (many stakeholders, multi-platform/product environment, mission-critical use cases, high business exposure, complex ticket routing).
Flexible communication on multiple support channels (ITSM, Teams, email).
Precise and diligent execution of ops processes.
Working on-call (standby).
A mindset of continuous learning (highly complex software stack with changing features).
Proactive communication.

Must-have technical skills: Snowflake, Python, Lambda, IAM, S3, KMS, Glue, CloudWatch, Terraform, Scrum.
Good-to-have technical skills: AWS VPC, Route 53, EventBridge, SNS, CloudTrail, Confluence, Jira.
Posted 1 month ago
0.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change, we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Senior Principal Consultant - Senior Data Engineer - Snowflake, AWS, Cortex AI & Horizon Catalog.

Role Summary: We are seeking an experienced Senior Data Engineer with deep expertise in modernizing Data & Analytics platforms on Snowflake, leveraging AWS services, Cortex AI, and Horizon Catalog for high-performance, AI-driven data management. The role involves designing scalable data architectures, integrating AI-powered automation, and optimizing data governance, lineage, and analytics frameworks.

Key Responsibilities:
Architect and modernize enterprise Data & Analytics platforms on Snowflake, utilizing AWS, Cortex AI, and Horizon Catalog.
Design and optimize Snowflake-based lakehouse architectures, integrating AWS services (S3, Redshift, Glue, Lambda, EMR, etc.).
Leverage Cortex AI for AI-driven data automation, predictive analytics, and workflow orchestration.
Implement Horizon Catalog for enhanced data lineage, governance, metadata management, and security.
Develop high-performance ETL/ELT pipelines, integrating Snowflake with AWS and AI-powered automation frameworks.
Utilize Snowflake's native capabilities such as Snowpark, Streams, Tasks, and Dynamic Tables for real-time data processing.
Establish data quality automation, lineage tracking, and AI-enhanced data governance strategies.
Collaborate with data scientists, ML engineers, and business stakeholders to drive AI-led data initiatives.
Continuously evaluate emerging AI and cloud-based data engineering technologies to improve efficiency and innovation.

Qualifications we seek in you! Minimum Qualifications:
Experience in Data Engineering, AI-powered automation, and cloud-based analytics.
Expertise in Snowflake (warehousing, Snowpark, Streams, Tasks, Dynamic Tables).
Strong experience with AWS services (S3, Redshift, Glue, Lambda, EMR).
Deep understanding of Cortex AI for AI-driven data engineering automation.
Proficiency with Horizon Catalog for metadata management, lineage tracking, and data governance.
Advanced knowledge of SQL, Python, and Scala for large-scale data processing.
Experience in modernizing Data & Analytics platforms and migrating on-premises solutions to Snowflake.
Strong expertise in data quality, AI-driven observability, and ModelOps for data workflows.
Familiarity with vector databases and Retrieval-Augmented Generation (RAG) architectures for AI-powered analytics.
Excellent leadership, problem-solving, and stakeholder collaboration skills.

Preferred Skills:
Experience with knowledge graphs (Neo4j, TigerGraph) for structured enterprise data systems.
Exposure to Kubernetes, Terraform, and CI/CD pipelines for scalable cloud deployments.
Background in streaming technologies (Kafka, Kinesis, AWS MSK, Snowflake Snowpipe).

Why Join Us?
Lead Data & AI platform modernization initiatives using Snowflake, AWS, Cortex AI, and Horizon Catalog.
Work on cutting-edge AI-driven automation for cloud-native data architectures.
Competitive salary, career progression, and an opportunity to shape next-gen AI-powered data solutions.

Why join Genpact?
Be a transformation leader: work at the cutting edge of AI, automation, and digital innovation.
Make an impact: drive change for global enterprises and solve business challenges that matter.
Accelerate your career: get hands-on experience, mentorship, and continuous learning opportunities.
Work with the best: join 140,000+ bold thinkers and problem-solvers who push boundaries every day.
Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
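Purely as an illustration of the Snowpark-style processing named in the responsibilities above, a minimal snowflake-snowpark-python sketch; the connection parameters and table names are placeholders, not details from the posting.

```python
from snowflake.snowpark import Session
from snowflake.snowpark import functions as F

# Placeholder connection parameters; real values would come from a secrets store.
session = Session.builder.configs(
    {
        "account": "example_account",
        "user": "etl_user",
        "password": "***",
        "warehouse": "TRANSFORM_WH",
        "database": "ANALYTICS",
        "schema": "CURATED",
    }
).create()

# Read a raw table, aggregate it, and persist the result as a curated table.
orders = session.table("RAW.ORDERS")
daily = orders.group_by(F.col("ORDER_DATE")).agg(
    F.sum(F.col("AMOUNT")).alias("TOTAL_AMOUNT")
)
daily.write.mode("overwrite").save_as_table("CURATED.DAILY_ORDER_TOTALS")

session.close()
```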
Posted 1 month ago
5.0 - 8.0 years
7 - 11 Lacs
Chennai, Malaysia, Malaysia
Work from Office
Responsibilities for Data Engineer:
Create and maintain the optimal data pipeline architecture.
Assemble large, complex data sets that meet functional / non-functional business requirements.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies.
Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
Create data tools for the analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
Work with data and analytics experts to strive for greater functionality in our data systems.

Qualifications for Data Engineer:
We are looking for a candidate with 5+ years of experience in a Data Engineer role who has attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience using the following software/tools:
Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL), and working familiarity with a variety of databases.
Experience building and optimizing big data pipelines, architectures, and data sets.
Experience performing root cause analysis of internal and external data and processes to answer specific business questions and identify opportunities for improvement.
Strong analytic skills related to working with unstructured data sets.
Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management.
A successful history of manipulating, processing, and extracting value from large disconnected datasets.
Working knowledge of message queuing, stream processing, and highly scalable big data stores.
Strong project management and organizational skills.
Experience supporting and working with cross-functional teams in a dynamic environment.
Experience with big data tools: Hadoop, Spark, Kafka, etc.
Experience with relational SQL and NoSQL databases, including MongoDB, Postgres, Cassandra, AWS Redshift, and Snowflake.
Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
Experience with AWS cloud services: EC2, EMR, ETL, Glue, RDS, Redshift.
Experience with stream-processing systems: Storm, Spark Streaming, etc.
Experience with object-oriented/object function scripting languages: Python, Java, etc.
Knowledge of data pipelines and workflow management tools like Airflow.
Location: Chennai, India / Kuala Lumpur, Malaysia
Posted 1 month ago
8.0 - 13.0 years
13 - 18 Lacs
Bengaluru
Work from Office
We are seeking a Senior Snowflake Developer/Architect who will be responsible for designing, developing, and maintaining scalable data solutions that effectively meet the needs of our organization. The role will serve as the primary point of accountability for the technical implementation of the data flows, repositories, and data-centric solutions in your area, translating requirements into efficient implementations. The data repositories, data flows, and data-centric solutions you create will support a wide range of reporting, analytics, decision support, and (generative) AI solutions.

Your Role:
Implement and manage data modelling techniques, including OLTP, OLAP, and Data Vault 2.0 methodologies.
Write optimized SQL queries for data extraction, transformation, and loading.
Utilize Python for advanced data processing, automation tasks, and system integration.
Be an advisor with your in-depth knowledge of Snowflake architecture, features, and best practices.
Develop and maintain complex data pipelines and ETL processes in Snowflake.
Collaborate with data architects, analysts, and stakeholders to design optimal and scalable data solutions.
Automate dbt jobs and build CI/CD pipelines using Azure DevOps for seamless deployment of data solutions.
Ensure data quality, integrity, and compliance throughout the data lifecycle.
Troubleshoot, optimize, and enhance existing data processes and queries for performance improvements.
Document data models, processes, and workflows clearly for future reference and knowledge sharing.
Build data tests, unit tests, and mock data frameworks.

Who You Are:
Bachelor's or master's degree in computer science, mathematics, or related fields.
At least 8 years of experience as a data warehouse expert, data engineer, or data integration specialist.
In-depth knowledge of Snowflake components, including security and governance.
Proven experience in implementing complex data models (e.g. OLTP, OLAP, Data Vault).
A strong understanding of ETL, including end-to-end data flows, from ingestion to data modeling and solution delivery.
Proven industry experience with dbt and Jinja scripts.
Strong proficiency in SQL, with additional knowledge of Python (i.e. pandas and PySpark) being advantageous.
Familiarity with data & analytics solutions such as AWS (especially Glue, Lambda, DMS) is nice to have.
Experience working with Azure DevOps and warehouse automation tools (e.g. Coalesce) is a plus.
Experience with healthcare R&D is a plus.
Excellent English communication skills, with the ability to effectively engage both with R&D scientists and software engineers.
Experience working in virtual and agile teams.
Posted 1 month ago
8.0 - 13.0 years
30 - 35 Lacs
Bengaluru
Work from Office
About The Role Data Engineer -1 (Experience 0-2 years)
What we offer: Our mission is simple: building trust. Our customers' trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That's why we at Kotak Group are dedicated to transforming banking by embracing a technology-first approach in everything we do, with an aim to enhance customer experience by providing superior banking services. We welcome and invite the best technological minds in the country to join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many.
About our team: DEX is the central data org for Kotak Bank and manages the entire data experience of the bank. DEX stands for Kotak's Data Exchange. The org comprises the Data Platform, Data Engineering and Data Governance charters and sits closely with the Analytics org. DEX is primarily working on a greenfield project to revamp the entire data platform, moving from on-premise solutions to a scalable AWS cloud-based platform. The team is being built from the ground up, which provides great opportunities to build things from scratch and deliver a best-in-class data lakehouse solution. The primary skills this team should encompass are software development (preferably Python) for platform building on AWS; data engineering with Spark (PySpark, Spark SQL, Scala) for ETL development; and advanced SQL and data modelling for analytics. The org is expected to be a 100+ member team, primarily based out of Bangalore, comprising ~10 sub-teams independently driving their charters. As a member of this team, you get the opportunity to learn the fintech space, be an early member of Kotak's digital transformation journey, leverage technology to build complex data platform solutions (real-time, micro-batch, batch and analytics) in a programmatic way, and build forward-looking systems that can be operated by machines using AI technologies. The data platform org is divided into 3 key verticals:
Data Platform: This vertical is responsible for building the data platform, which includes optimized storage for the entire bank, a centralized data lake, a managed compute and orchestration framework (including serverless data solutions), a central data warehouse for extremely high-concurrency use cases, connectors for different sources, a customer feature repository, cost-optimization solutions such as EMR optimizers, automation, and observability capabilities for Kotak's data platform. The team will also be the center of data engineering excellence, driving training and knowledge-sharing sessions with the large data-consumer base within Kotak.
Data Engineering: This team will own data pipelines for thousands of datasets, source data from 100+ source systems, and enable data consumption for 30+ data analytics products. The team will build data models in a config-based, programmatic way and think big to build one of the most leveraged data models for financial orgs. This team will also enable centralized reporting for Kotak Bank, cutting across multiple products and dimensions. Additionally, the data built by this team will be consumed by 20K+ branch consumers, RMs, branch managers and all analytics use cases.
Data Governance: The team will be the central data governance team for Kotak Bank, managing the metadata platforms and the Data Privacy, Data Security, Data Stewardship and Data Quality platforms. If you have the right data skills and are ready to build data lake solutions from scratch for high-concurrency systems involving multiple source systems, then this is the team for you.
Your day-to-day role will include: Drive business decisions with technical input and lead the team. Design, implement, and support a data infrastructure from scratch. Manage AWS resources, including EC2, EMR, S3, Glue, Redshift, and MWAA. Extract, transform, and load data from various sources using SQL and AWS big data technologies. Explore and learn the latest AWS technologies to enhance capabilities and efficiency. Collaborate with data scientists and BI engineers to adopt best practices in reporting and analysis. Improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers. Build data platforms, data pipelines, or data management and governance tools.
BASIC QUALIFICATIONS for Data Engineer / SDE in Data: Bachelor's degree in Computer Science, Engineering, or a related field. Experience in data engineering. Strong understanding of AWS technologies, including S3, Redshift, Glue, and EMR. Experience with data pipeline tools such as Airflow and Spark. Experience with data modeling and data quality best practices. Excellent problem-solving and analytical skills. Strong communication and teamwork skills. Experience in at least one modern scripting or programming language, such as Python, Java, or Scala. Strong advanced SQL skills.
PREFERRED QUALIFICATIONS: AWS cloud technologies: Redshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, IAM, Airflow. Prior experience in the Indian banking segment and/or fintech is desired. Experience with non-relational databases and data stores. Building and operating highly available, distributed data processing systems for large datasets. Professional software engineering and best practices for the full software development life cycle. Designing, developing, and implementing different types of data warehousing layers. Leading the design, implementation, and successful delivery of large-scale, critical, or complex data solutions. Building scalable data infrastructure and understanding distributed systems concepts. SQL, ETL, and data modelling. Ensuring the accuracy and availability of data to customers. Proficient in at least one scripting or programming language for handling large-volume data processing. Strong presentation and communication skills.
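To illustrate the Spark-on-AWS ETL work this role involves, the following is a minimal PySpark sketch that reads raw data from S3, applies a simple transformation, and writes partitioned Parquet back to S3; the bucket names, paths and column names are hypothetical placeholders, not details from the posting.

```python
# Minimal PySpark ETL sketch (bucket names, paths and columns are hypothetical).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_daily_batch").getOrCreate()

# Read raw CSV landed in S3 by upstream source systems.
raw = (
    spark.read
    .option("header", "true")
    .csv("s3://example-raw-bucket/orders/2024-01-01/")
)

# Basic cleansing and type casting.
cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_date", F.to_date("order_date"))
       .filter(F.col("amount") > 0)
)

# Write partitioned Parquet for downstream warehouse and analytics consumption.
(
    cleaned.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-curated-bucket/orders/")
)
```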
Posted 1 month ago
9.0 - 14.0 years
30 - 35 Lacs
Bengaluru
Work from Office
About The Role Data Engineer -2 (Experience 2-5 years)
What we offer: Our mission is simple: building trust. Our customers' trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That's why we at Kotak Group are dedicated to transforming banking by embracing a technology-first approach in everything we do, with an aim to enhance customer experience by providing superior banking services. We welcome and invite the best technological minds in the country to join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many.
About our team: DEX is the central data org for Kotak Bank and manages the entire data experience of the bank. DEX stands for Kotak's Data Exchange. The org comprises the Data Platform, Data Engineering and Data Governance charters and sits closely with the Analytics org. DEX is primarily working on a greenfield project to revamp the entire data platform, moving from on-premise solutions to a scalable AWS cloud-based platform. The team is being built from the ground up, which provides great opportunities to build things from scratch and deliver a best-in-class data lakehouse solution. The primary skills this team should encompass are software development (preferably Python) for platform building on AWS; data engineering with Spark (PySpark, Spark SQL, Scala) for ETL development; and advanced SQL and data modelling for analytics. The org is expected to be a 100+ member team, primarily based out of Bangalore, comprising ~10 sub-teams independently driving their charters. As a member of this team, you get the opportunity to learn the fintech space, be an early member of Kotak's digital transformation journey, leverage technology to build complex data platform solutions (real-time, micro-batch, batch and analytics) in a programmatic way, and build forward-looking systems that can be operated by machines using AI technologies. The data platform org is divided into 3 key verticals:
Data Platform: This vertical is responsible for building the data platform, which includes optimized storage for the entire bank, a centralized data lake, a managed compute and orchestration framework (including serverless data solutions), a central data warehouse for extremely high-concurrency use cases, connectors for different sources, a customer feature repository, cost-optimization solutions such as EMR optimizers, automation, and observability capabilities for Kotak's data platform. The team will also be the center of data engineering excellence, driving training and knowledge-sharing sessions with the large data-consumer base within Kotak.
Data Engineering: This team will own data pipelines for thousands of datasets, source data from 100+ source systems, and enable data consumption for 30+ data analytics products. The team will build data models in a config-based, programmatic way and think big to build one of the most leveraged data models for financial orgs. This team will also enable centralized reporting for Kotak Bank, cutting across multiple products and dimensions. Additionally, the data built by this team will be consumed by 20K+ branch consumers, RMs, branch managers and all analytics use cases.
Data Governance: The team will be the central data governance team for Kotak Bank, managing the metadata platforms and the Data Privacy, Data Security, Data Stewardship and Data Quality platforms. If you have the right data skills and are ready to build data lake solutions from scratch for high-concurrency systems involving multiple source systems, then this is the team for you.
Your day-to-day role will include: Drive business decisions with technical input and lead the team. Design, implement, and support a data infrastructure from scratch. Manage AWS resources, including EC2, EMR, S3, Glue, Redshift, and MWAA. Extract, transform, and load data from various sources using SQL and AWS big data technologies. Explore and learn the latest AWS technologies to enhance capabilities and efficiency. Collaborate with data scientists and BI engineers to adopt best practices in reporting and analysis. Improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers. Build data platforms, data pipelines, or data management and governance tools.
BASIC QUALIFICATIONS for Data Engineer / SDE in Data: Bachelor's degree in Computer Science, Engineering, or a related field. Experience in data engineering. Strong understanding of AWS technologies, including S3, Redshift, Glue, and EMR. Experience with data pipeline tools such as Airflow and Spark. Experience with data modeling and data quality best practices. Excellent problem-solving and analytical skills. Strong communication and teamwork skills. Experience in at least one modern scripting or programming language, such as Python, Java, or Scala. Strong advanced SQL skills.
PREFERRED QUALIFICATIONS: AWS cloud technologies: Redshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, IAM, Airflow. Prior experience in the Indian banking segment and/or fintech is desired. Experience with non-relational databases and data stores. Building and operating highly available, distributed data processing systems for large datasets. Professional software engineering and best practices for the full software development life cycle. Designing, developing, and implementing different types of data warehousing layers. Leading the design, implementation, and successful delivery of large-scale, critical, or complex data solutions. Building scalable data infrastructure and understanding distributed systems concepts. SQL, ETL, and data modelling. Ensuring the accuracy and availability of data to customers. Proficient in at least one scripting or programming language for handling large-volume data processing. Strong presentation and communication skills.
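For the "manage AWS resources" part of this role, here is a minimal sketch that starts an AWS Glue job run with boto3 and polls its status; the job name, region and argument are hypothetical placeholders.

```python
# Minimal sketch: trigger an AWS Glue job and poll its status with boto3.
# The job name, region and argument below are hypothetical placeholders.
import time
import boto3

glue = boto3.client("glue", region_name="ap-south-1")

run = glue.start_job_run(
    JobName="orders-curation-job",            # hypothetical Glue job
    Arguments={"--run_date": "2024-01-01"},   # example job argument
)
run_id = run["JobRunId"]

while True:
    status = glue.get_job_run(JobName="orders-curation-job", RunId=run_id)
    state = status["JobRun"]["JobRunState"]
    print("Glue job state:", state)
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        break
    time.sleep(30)
```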
Posted 1 month ago
6.0 - 11.0 years
25 - 37 Lacs
Hyderabad, Bengaluru, Delhi / NCR
Work from Office
Azure expertise: proven experience with Azure cloud services, especially Azure Data Factory, Azure SQL Database and Azure Databricks. Expert in PySpark data processing and analytics. Strong background in building and optimizing data pipelines and workflows.
Required Candidate Profile: Solid experience with data modeling, ETL processes and data warehousing. Performance tuning: ability to optimize data pipelines and jobs to ensure scalability and performance, troubleshooting and resolving performance issues.
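As a small illustration of the performance-tuning theme in this posting, the snippet below shows two common PySpark optimizations, broadcasting a small dimension table in a join and repartitioning before a partitioned write; the storage paths, table layout and column names are hypothetical placeholders.

```python
# Minimal PySpark tuning sketch: broadcast join and controlled partitioning.
# ADLS Gen2 paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("tuning_example").getOrCreate()

facts = spark.read.parquet("abfss://curated@examplelake.dfs.core.windows.net/orders/")
dims = spark.read.parquet("abfss://curated@examplelake.dfs.core.windows.net/customers/")

# Broadcasting the small dimension table avoids shuffling the large fact table.
joined = facts.join(F.broadcast(dims), on="customer_id", how="left")

# Repartition by the write key to keep output files balanced and reduce skew.
(
    joined.repartition("order_date")
    .write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("abfss://serving@examplelake.dfs.core.windows.net/orders_enriched/")
)
```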
Posted 1 month ago