8.0 - 12.0 years
12 - 22 Lacs
Hyderabad, Secunderabad
Work from Office
Proficiency in SQL, Python, and data pipeline frameworks such as Apache Spark, Databricks, or Airflow. Hands-on experience with cloud data platforms (e.g., Azure Synapse, AWS Redshift, Google BigQuery). Strong understanding of data modeling, ETL/ELT, and data lake, data warehouse, and data mart architectures. Knowledge of Azure Data Factory or AWS Glue. Experience in developing reports and dashboards using tools such as Power BI, Tableau, or Looker.
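For illustration only, a minimal PySpark sketch of the kind of ETL/ELT step this listing describes: read raw files, clean and aggregate, and write curated Parquet to a lake zone. All paths and column names (sales.csv, region, amount) are assumptions, not details from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily_sales_load").getOrCreate()

# Read a raw CSV drop from the landing zone (illustrative path).
raw = spark.read.option("header", "true").csv("s3a://raw-zone/sales.csv")

# Basic cleansing: enforce types and drop incomplete rows.
cleaned = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .dropna(subset=["region", "amount"])
)

# Aggregate into a curated, query-friendly shape.
summary = cleaned.groupBy("region").agg(F.sum("amount").alias("total_amount"))

# Persist to the curated zone for downstream warehouse/datamart loads.
summary.write.mode("overwrite").parquet("s3a://curated-zone/sales_by_region/")
```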
Posted 1 month ago
5.0 - 10.0 years
15 - 25 Lacs
Hyderabad/Secunderabad, Bangalore/Bengaluru, Delhi / NCR
Hybrid
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose, the relentless pursuit of a world that works better for people, we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of Lead Consultant - Data Engineer (AWS + Python, Spark, Kafka for ETL)!

Responsibilities
- Develop, deploy, and manage ETL pipelines using AWS services, Python, Spark, and Kafka.
- Integrate structured and unstructured data from various sources into data lakes and data warehouses.
- Design and deploy scalable, highly available, and fault-tolerant data processes using AWS data services (Glue, Lambda, Step Functions, Redshift).
- Monitor and optimize the performance of cloud resources to ensure efficient utilization and cost-effectiveness.
- Implement and maintain security measures to protect data and systems within the AWS environment, including IAM policies, security groups, and encryption mechanisms.
- Migrate application data from legacy databases to cloud-based solutions (Redshift, DynamoDB, etc.) for high availability at low cost.
- Develop application programs using big data technologies such as Apache Hadoop and Apache Spark, with appropriate cloud services such as AWS.
- Build data pipelines by developing ETL (Extract-Transform-Load) processes.
- Implement backup, disaster recovery, and business continuity strategies for cloud-based applications and data.
- Analyse business and functional requirements, including reviewing existing system configurations and operating methodologies and understanding evolving business needs.
- Analyse requirements/user stories in business meetings, assess the impact of requirements on different platforms/applications, and convert business requirements into technical requirements.
- Participate in design reviews to provide input on functional requirements, product designs, schedules, and potential problems.
- Understand the current application infrastructure and suggest cloud-based solutions that reduce operational cost, require minimal maintenance, and provide high availability with improved security.
- Perform unit testing on modified software to ensure that new functionality works as expected while existing functionality continues to work unchanged.
- Coordinate with release management and other supporting teams to deploy changes to the production environment.

Qualifications we seek in you!
Minimum Qualifications
- Experience in designing and implementing data pipelines, building data applications, and performing data migration on AWS.
- Strong experience implementing data lakes using AWS services such as Glue, Lambda, Step Functions, and Redshift.
- Experience with Databricks is an added advantage.
- Strong experience in Python and SQL.
- Proven expertise in AWS services such as S3, Lambda, Glue, EMR, and Redshift.
- Advanced programming skills in Python for data processing and automation.
- Hands-on experience with Apache Spark for large-scale data processing.
- Experience with Apache Kafka for real-time data streaming and event processing.
- Proficiency in SQL for data querying and transformation.
- Strong understanding of security principles and best practices for cloud-based environments.
- Experience with monitoring tools and implementing proactive measures to ensure system availability and performance.
- Excellent problem-solving skills and the ability to troubleshoot complex issues in a distributed, cloud-based environment.
- Strong communication and collaboration skills to work effectively with cross-functional teams.

Preferred Qualifications/Skills
- Master's degree in Computer Science, Electronics, or Electrical Engineering.
- AWS Data Engineering and cloud certifications; Databricks certifications.
- Experience with multiple data integration technologies and cloud platforms.
- Knowledge of Change and Incident Management processes.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Please note that Genpact does not charge fees to process job applications, and applicants are never required to pay to participate in our hiring process; examples of recruitment scams include purchasing a 'starter kit', paying to apply, or purchasing equipment or training.
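For illustration only, a minimal sketch of the Kafka-to-data-lake pattern this listing calls out: Spark Structured Streaming reading a Kafka topic and appending Parquet to S3. The broker address, topic, event schema, and bucket paths are assumptions, and the spark-sql-kafka connector is assumed to be on the classpath; this is not Genpact's actual pipeline.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka_to_s3").getOrCreate()

# Schema of the JSON events on the topic (assumed).
event_schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
])

# Subscribe to the Kafka topic and parse the JSON payload.
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "orders")
         .load()
         .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
         .select("e.*")
)

# Continuously append parsed events to the lake for downstream Glue/Redshift loads.
query = (
    events.writeStream.format("parquet")
          .option("path", "s3a://data-lake/orders/")
          .option("checkpointLocation", "s3a://data-lake/_checkpoints/orders/")
          .start()
)
query.awaitTermination()
```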
Posted 1 month ago
8.0 - 13.0 years
10 - 15 Lacs
Pune
Work from Office
What You'll Do
The Global Analytics and Insights (GAI) team is seeking an experienced Data Visualization Manager to lead our data-driven decision-making initiatives. The ideal candidate will have a strong background in Power BI and expert-level SQL proficiency to drive actionable insights, demonstrated leadership and mentoring experience, and the ability to drive innovation and manage complex projects. You will become an expert in Avalara's financial, marketing, sales, and operations data. This position reports to a Senior Manager.

What Your Responsibilities Will Be
- Define and execute the organization's BI strategy, ensuring alignment with business goals.
- Lead, mentor, and manage a team of BI developers and analysts, fostering a culture of continuous learning.
- Develop and implement robust data visualization and reporting solutions using Power BI.
- Optimize data models, dashboards, and reports to provide meaningful insights and support decision-making.
- Collaborate with business leaders, analysts, and cross-functional teams to gather requirements and translate them into actionable BI solutions.
- Be a trusted advisor to business teams, identifying opportunities where BI can drive efficiencies and improvements.
- Ensure data accuracy, consistency, and integrity across multiple data sources.
- Stay updated with the latest advancements in BI tools, SQL performance tuning, and data visualization best practices.
- Define and enforce BI development standards, governance, and documentation best practices.
- Work closely with Data Engineering teams to define and maintain scalable data pipelines.
- Drive automation and optimization of reporting processes to improve efficiency.

What You'll Need to be Successful
- 8+ years of experience in Business Intelligence, Data Analytics, or related fields.
- 5+ years of expert proficiency in Power BI, including DAX, Power Query, data modeling, and dashboard creation.
- 5+ years of strong SQL skills, with experience in writing complex queries, performance tuning, and working with large datasets.
- Familiarity with cloud-based BI solutions (e.g., Azure Synapse, AWS Redshift, Snowflake) is a plus.
- A solid understanding of ETL processes and data warehousing concepts.
- Strong problem-solving, analytical thinking, and decision-making skills.
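For illustration only, the kind of windowed SQL that often sits behind a Power BI trend dashboard: a month-over-month revenue comparison using LAG. The table, columns, and sample rows are assumptions; the query is run here against an in-memory SQLite database (3.25+ for window functions) purely so the sketch is self-contained.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Illustrative table and sample data standing in for a warehouse fact table.
conn.executescript("""
    CREATE TABLE monthly_revenue (month TEXT, revenue REAL);
    INSERT INTO monthly_revenue VALUES
        ('2024-01', 120.0), ('2024-02', 135.5), ('2024-03', 128.0);
""")

# Month-over-month change via a window function, as a dashboard measure source.
query = """
    SELECT month,
           revenue,
           revenue - LAG(revenue) OVER (ORDER BY month) AS mom_change
    FROM monthly_revenue
    ORDER BY month;
"""

for row in conn.execute(query):
    print(row)
```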
Posted 1 month ago
6.0 - 10.0 years
6 - 16 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Role: AWS Redshift Ops + PL/SQL + Unix
Years of experience: 6+
Detailed job description / skill set:
- Incident management
- Troubleshooting issues
- Contributing to development
- Collaborating with other teams
- Suggesting improvements
- Enhancing system performance
- Training new employees
Mandatory skills: AWS Redshift, PL/SQL, Apache Airflow, Unix, ETL, DWH
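For illustration only, a minimal Apache Airflow DAG sketch combining the Redshift and Unix sides of this role: a nightly job that vacuums and analyzes a table via psql and then runs a disk check. The DAG id, table name, and REDSHIFT_URL environment variable are assumptions; written against the Airflow 2.x operator layout.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="redshift_nightly_maintenance",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Reclaim space and refresh planner statistics on an assumed table.
    vacuum_sales = BashOperator(
        task_id="vacuum_sales",
        bash_command="psql \"$REDSHIFT_URL\" -c 'VACUUM sales; ANALYZE sales;'",
    )

    # A simple Unix-side health check after maintenance completes.
    check_disk = BashOperator(
        task_id="check_disk",
        bash_command="df -h /data",
    )

    vacuum_sales >> check_disk
```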
Posted 1 month ago
3.0 - 6.0 years
5 - 7 Lacs
Bengaluru
Work from Office
Job Title: Cloud Data Warehouse Administrator (DBA), AWS Redshift | Titan Company Limited
Company: Titan Company Limited
Location: Corporate Office, Bengaluru
Experience: 3+ years
Education: BE / MCA / MSc-IT (from reputed institutions)

Job Description
Titan Company Limited is looking for a Cloud Data Warehouse Administrator (DBA) to join our growing Digital team in Bengaluru. The ideal candidate will have strong expertise in AWS-based data warehouse solutions with hands-on experience in Redshift (mandatory), RDS, S3, and DynamoDB, along with an eye for performance, scalability, and cost optimization.

Key Responsibilities
- Administer and manage AWS data environments: Redshift, RDS, DynamoDB, S3
- Monitor system performance and troubleshoot data-related issues
- Ensure availability, backup, disaster recovery, and security of databases
- Design and implement cost-optimized, high-availability solutions
- Maintain operational documentation and SOPs for all DBA tasks
- Collaborate with internal and external teams for issue resolution and enhancements
- Maintain data-level security (row/column-level security, encryption, masking)
- Analyze performance and implement improvements proactively

Required Skills and Experience
- 4+ years of experience in a DBA role; 2+ years on AWS cloud (Redshift, RDS, Aurora)
- Experience in managing cloud database architectures end-to-end
- Expertise in database performance tuning, replication, and DR strategies
- Familiarity with Agile working environments and cross-functional collaboration
- Excellent communication and documentation skills
- Preferred: AWS/DBA certifications

About Titan Company Limited
Titan Company Limited, a part of the Tata Group, is one of India's most admired lifestyle companies. With a strong portfolio in watches, eyewear, jewelry, and accessories, Titan is committed to innovation, quality, and cutting-edge technology through its Digital initiatives.

Interested candidates may share their details at amruthaj@titan.co.in
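For illustration only, a small boto3 sketch of one routine backup task a Redshift DBA might automate: taking a manual cluster snapshot. The region, cluster identifier, and naming convention are assumptions.

```python
from datetime import datetime, timezone

import boto3

redshift = boto3.client("redshift", region_name="ap-south-1")  # assumed region

# Timestamped manual snapshot of an assumed cluster.
snapshot_id = "manual-" + datetime.now(timezone.utc).strftime("%Y%m%d-%H%M")
redshift.create_cluster_snapshot(
    SnapshotIdentifier=snapshot_id,
    ClusterIdentifier="analytics-cluster",  # assumed cluster name
)
print("Requested snapshot:", snapshot_id)
```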
Posted 1 month ago
12.0 - 15.0 years
14 - 17 Lacs
Bengaluru
Work from Office
Data and ML Platform Engineering employs new-age technologies such as distributed computing constructs, real-time model predictions, deep learning, accelerated compute (GPU), scalable feature stores (Cassandra, MySQL, Elasticsearch, Solr, Aerospike), scalable programming constructs in Python, and ML frameworks (TensorFlow, PyTorch, etc.).

Roles and Responsibilities
- Drive the data architecture, data modelling, design, and implementation of data applications using a standard open-source big data tech stack, data warehouse/MPP databases, and distributed systems.
- Gather business and functional requirements from external and/or internal users, and translate requirements into technical specifications to build robust, scalable, supportable solutions.
- Participate in and drive the full development lifecycle.
- Build standards and best practices around a common data model and architecture, data governance, data quality, and security for multiple business areas across Myntra. Collaborate with platform, product, and other engineering and business teams to evangelise those standards for adoption across the org.
- Mentor data engineers at various levels of seniority through design and code reviews, providing constructive and timely feedback on code quality, design issues, and technology choices, with performance and scalability as critical drivers.
- Manage resources on multiple technical projects and ensure schedules, milestones, and priorities are compatible with technology and business goals.
- Set up best practices to help the team achieve the above, constantly look for better ways to use technology, and drive the adoption of these best practices around coding, design, quality, and performance in your team.
- Stay abreast of the technology industry and market trends in data architecture and development.
- Demonstrate understanding of the data lifecycle (data modelling, processing, data quality, data evolution) and underlying tech stacks (Hadoop, Spark, MPP).
- Drive data architecture standards encompassing the complete data lifecycle (ingestion, modelling, processing, consumption, change management, quality, anomaly detection).
- Challenge the status quo and propose innovative ways to process, model, and consume data when it comes to tech stack choices or design principles.
- Implement the long-term technology vision for your team.
- Be an active participant in technology forums; represent Myntra in external forums.

Qualifications & Experience
- 12-15 years of experience in software development.
- 5+ years of development and/or DBA experience with relational database management systems (RDBMS) such as MySQL and SQL Server.
- 8+ years of hands-on experience implementing and performance-tuning MPP databases (Microsoft SQL DW, AWS Redshift, Teradata, Vertica, etc.).
- Experience designing database environments, analyzing production deployments, and making recommendations to optimize performance.
- Problem-solving skills for complex and large-scale data application problems.
- Technical breadth: exposure to a wide variety of problem spaces and data technologies, e.g. real-time and batch data processing and the trade-offs between commercial and open-source tech stacks.
- Hands-on experience with enterprise data warehouse and big data storage and computation frameworks such as OLAP systems, MPP (SQL DW, Redshift, Oracle RAC, Teradata, Druid), and Hadoop compute (MR, Spark, Flink, Hive); awareness of the pitfalls and use cases for a wide variety of solutions.
- Ability to drive capacity planning, performance optimization, and large-scale system integrations.
- Expertise in designing, implementing, and operating stable, scalable solutions that flow data from production systems into analytical data platforms (big data tech stack + MPP) and into end-user-facing applications, for both real-time and batch use cases.
- Data modelling skills (relational, multi-dimensional) and proficiency in at least one programming language, preferably Java, Scala, or Python.
- Drive the design and development of automated monitoring, alerting, and self-healing (restartability/graceful failure) features while building consumption pipelines (see the sketch after this list).
- Mentoring skills: be the technical mentor to your team.
- B.Tech. or higher in Computer Science or equivalent required.
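For illustration only, a minimal sketch of the restartability/graceful-failure idea referenced above: wrapping an idempotent pipeline step in retries with exponential backoff. The load_partition step and its behaviour are assumptions.

```python
import time

def with_retries(step, attempts=3, base_delay=2.0):
    """Run a pipeline step, retrying with exponential backoff before giving up."""
    for attempt in range(1, attempts + 1):
        try:
            return step()
        except Exception as exc:  # in practice, catch only transient error types
            if attempt == attempts:
                raise
            delay = base_delay * (2 ** (attempt - 1))
            print(f"Attempt {attempt} failed ({exc}); retrying in {delay:.0f}s")
            time.sleep(delay)

def load_partition():
    # Placeholder for an idempotent load, e.g. overwriting one date partition.
    print("Loading partition dt=2024-01-01")

with_retries(load_partition)
```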
Posted 2 months ago
3.0 - 8.0 years
10 - 18 Lacs
Faridabad
Work from Office
Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred.
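For illustration only, a sketch of one common step in pipelines like these: bulk-loading curated Parquet from S3 into Redshift with a COPY command issued through psycopg2. The cluster endpoint, credentials, table, bucket, and IAM role are all assumptions.

```python
import psycopg2

# Illustrative connection details; in practice these come from a secrets store.
conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="***",
)

copy_sql = """
    COPY analytics.sales_fact
    FROM 's3://curated-zone/sales_by_region/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
    FORMAT AS PARQUET;
"""

# Run the COPY inside a transaction; the context managers commit on success.
with conn, conn.cursor() as cur:
    cur.execute(copy_sql)
```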
Posted 2 months ago
3.0 - 8.0 years
10 - 18 Lacs
Vadodara
Work from Office
Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred.
Posted 2 months ago
3.0 - 8.0 years
10 - 18 Lacs
Varanasi
Work from Office
Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred.
Posted 2 months ago
3.0 - 8.0 years
10 - 18 Lacs
Agra
Work from Office
Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred.
Posted 2 months ago
3.0 - 8.0 years
10 - 18 Lacs
Surat
Work from Office
Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred.
Posted 2 months ago
3.0 - 8.0 years
10 - 18 Lacs
Ludhiana
Work from Office
Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred.
Posted 2 months ago
3.0 - 8.0 years
10 - 18 Lacs
Coimbatore
Work from Office
Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred.
Posted 2 months ago
3.0 - 8.0 years
10 - 18 Lacs
Jaipur
Work from Office
Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred.
Posted 2 months ago
3.0 - 8.0 years
10 - 18 Lacs
Lucknow
Work from Office
Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred.
Posted 2 months ago
3.0 - 8.0 years
10 - 18 Lacs
Mysuru
Work from Office
Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred.
Posted 2 months ago
3.0 - 8.0 years
10 - 18 Lacs
Chandigarh
Work from Office
Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred.
Posted 2 months ago
3.0 - 5.0 years
12 - 13 Lacs
Thane, Navi Mumbai, Pune
Work from Office
We at Acxiom Technologies are hiring a PySpark Developer for our Mumbai location.
Relevant experience: 1 to 4 years
Location: Mumbai
Mode of work: Work from office
Notice period: Up to 20 days

Job Description:
- Proven experience as a PySpark developer
- Hands-on expertise with AWS Redshift
- Strong proficiency in PySpark, Spark, Python, and Hive
- Solid experience with SQL
- Excellent communication skills

Benefits of working at Acxiom:
- Statutory benefits
- Paid leaves
- Phenomenal career growth
- Exposure to the banking domain

About Acxiom Technologies: Acxiom Technologies is a leading software solutions and services company that provides consulting services to global firms and has established itself as one of the most sought-after consulting organizations in the field of Data Management and Business Intelligence. Visit our website at https://www.acxtech.co.in/ for a detailed overview of our company.

Interested candidates can share their resumes on 7977418669. Thank you.
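For illustration only, a small PySpark-with-Hive sketch matching the stack this listing names (PySpark, Hive, SQL): summarise a transactions table and publish the result back as a Hive table. The database and table names are assumptions, not details from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder.appName("daily_txn_summary")
    .enableHiveSupport()
    .getOrCreate()
)

# Read an assumed Hive table of banking transactions.
txns = spark.table("banking.transactions")

# Daily counts and totals per channel.
daily = (
    txns.groupBy("txn_date", "channel")
        .agg(F.count("*").alias("txn_count"), F.sum("amount").alias("total_amount"))
)

# Publish the summary back to Hive for reporting queries.
daily.write.mode("overwrite").saveAsTable("banking.daily_txn_summary")
```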
Posted 2 months ago
8 - 13 years
12 - 22 Lacs
Gurugram
Work from Office
Data & Information Architecture Lead | 8 to 15 years | Gurgaon

Summary: An excellent opportunity for data architect professionals with expertise in data engineering, analytics, AWS, and databases. Location: Gurgaon.

Your Future Employer: A leading financial services provider specializing in delivering innovative and tailored solutions to meet the diverse needs of its clients, offering a wide range of services including investment management, risk analysis, and financial consulting.

Responsibilities
- Design and optimize the architecture of an end-to-end data fabric, inclusive of the data lake, data stores, and EDW, in alignment with EA guidelines and standards for cataloging and maintaining data repositories.
- Undertake detailed analysis of information management requirements across all systems, platforms, and applications to guide the development of information management standards.
- Lead the design of the information architecture across multiple data types, working closely with business partners/consumers, the MIS team, the AI/ML team, and other departments to design, deliver, and govern future-proof data assets and solutions.
- Design and ensure delivery excellence for (a) large and complex data transformation programs, (b) small and nimble data initiatives to realize quick gains, and (c) engagements with OEMs and partners to bring in the best tools and delivery methods.
- Drive data domain modeling, data engineering, and data resiliency design standards across the microservices and analytics application fabric for autonomy, agility, and scale.

Requirements
- Deep understanding of the data and information architecture discipline, processes, concepts, and best practices.
- Hands-on expertise in building and implementing data architecture for large enterprises.
- Proven architecture modelling skills; strong analytics and reporting experience.
- Strong data design, management, and maintenance experience.
- Strong experience with data modelling tools.
- Extensive experience with cloud-native lake technologies, e.g. AWS native lake solutions.
Posted 2 months ago