6.0 - 8.0 years
25 - 30 Lacs
Bengaluru
Work from Office
Role & responsibilities

Mandatory skills: Python, AWS, Data Modeler, SQL; DevOps (good to have, not mandatory).
Please avoid candidates who qualified from universities in Hyderabad and Telangana. Candidates from Hyderabad and Telangana may be considered only if they are currently working in tier-one companies.

Job Description: The ideal candidate will have 6 to 8 years of experience in data modelling and architecture, with deep expertise in Python, the AWS cloud stack, data warehousing, and enterprise data modelling tools. This individual will be responsible for designing and creating enterprise-grade data models and driving the implementation of a Layered Scalable Architecture or Medallion Architecture to support robust, scalable, high-quality data marts across multiple business units. The role involves managing complex datasets from systems such as PoS, ERP, CRM, and external sources while optimizing performance and cost. You will also provide strategic leadership on data modelling standards, governance, and best practices, ensuring the foundation for analytics and reporting is solid and future-ready.

Key Responsibilities:
- Design and deliver conceptual, logical, and physical data models using tools like ERWin.
- Implement Layered Scalable Architecture / Medallion Architecture for building scalable, standardized data marts.
- Optimize performance and cost of AWS-based data infrastructure (Redshift, S3, Glue, Lambda, etc.).
- Collaborate with cross-functional teams (IT, business, analysts) to gather data requirements and ensure model alignment with KPIs and business logic.
- Develop and optimize SQL code, materialized views, and stored procedures in AWS Redshift (an illustrative sketch follows this listing).
- Ensure data governance, lineage, and quality mechanisms are established across systems.
- Lead and mentor technical teams in an Agile project delivery model.
- Manage data layer creation and documentation: data dictionary, ER diagrams, purpose mapping.
- Identify data gaps and availability issues with respect to source systems.

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, IT, or a related field (B.E./B.Tech/M.E./M.Tech/MCA).
- Minimum 8 years of experience in data modeling and architecture.
- Proficiency with data modeling tools such as ERWin, with strong knowledge of forward and reverse engineering.
- Deep expertise in SQL (including advanced SQL, stored procedures, and performance tuning).
- Strong experience in Python, data warehousing, RDBMS, and ETL tools like AWS Glue, IBM DataStage, or SAP Data Services.
- Hands-on experience with AWS services: Redshift, S3, Glue, RDS, Lambda, Bedrock, and Q.
- Good understanding of reporting tools such as Tableau, Power BI, or AWS QuickSight.
- Exposure to DevOps/CI-CD pipelines, AI/ML, Gen AI, NLP, and polyglot programming is a plus.
- Familiarity with data governance tools (e.g., ORION/EIIG).
- Domain knowledge in Retail, Manufacturing, HR, or Finance preferred.
- Excellent written and verbal communication skills.

Certifications (preferred, good to have):
- AWS certification (e.g., AWS Certified Solutions Architect or Data Analytics Specialty).
- Data governance or data modelling certifications (e.g., CDMP, Databricks, or TOGAF).

Mandatory Skills: Python, AWS, Technical Architecture, AIML, SQL, Data Warehousing, Data Modelling

Preferred candidate profile: Share resumes on Sakunthalaa@valorcrest.in
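For illustration only, here is a minimal sketch of the kind of Redshift work this listing describes: a gold-layer materialized view built from a silver-layer table, in the spirit of a Medallion / layered architecture. It assumes Redshift is reached through its PostgreSQL-compatible endpoint with psycopg2, and every schema, table, and column name (silver.pos_sales, gold.daily_sales_mart, and so on) is a hypothetical placeholder rather than anything specified in the posting.

```python
# Minimal sketch: a gold-layer mart derived from a silver-layer table in Redshift.
# Assumptions: Redshift reached via its PostgreSQL-compatible endpoint (psycopg2);
# all schema/table/column names are illustrative placeholders.
import psycopg2

# One-time DDL (run once): aggregate silver-layer PoS facts into a gold-layer mart.
CREATE_MART = """
CREATE MATERIALIZED VIEW gold.daily_sales_mart AS
SELECT store_id,
       sale_date,
       SUM(net_amount)                AS total_sales,
       COUNT(DISTINCT transaction_id) AS transactions
FROM silver.pos_sales
GROUP BY store_id, sale_date;
"""

def refresh_mart(host: str, dbname: str, user: str, password: str) -> None:
    """Refresh the illustrative mart; Redshift materialized views refresh on demand."""
    with psycopg2.connect(host=host, port=5439, dbname=dbname,
                          user=user, password=password) as conn:
        with conn.cursor() as cur:
            cur.execute("REFRESH MATERIALIZED VIEW gold.daily_sales_mart;")
        conn.commit()
```

On a first run the CREATE statement would be executed once; subsequent loads would only issue the REFRESH.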
Posted 2 months ago
5.0 - 7.0 years
19 Lacs
Kolkata, Mumbai, Hyderabad
Work from Office
Reporting to: Global Head of Data Operations

Role purpose: As a Data Engineer, you will be a driving force towards data engineering excellence. Working with other data engineers, analysts, and the architecture function, you'll be involved in building out a modern data platform using a number of cutting-edge technologies, in a multi-cloud environment. You'll get the opportunity to spread your knowledge and skills across multiple areas, with involvement in a range of different functional areas. As the business grows, we want our staff to grow with us, so there'll be plenty of opportunity to learn and upskill in areas such as data pipelines, data integrations, data preparation, data models, and analytical and reporting marts. Also, whilst work often follows business requirements and design concepts, you'll play a huge part in the continuous development and maturing of design patterns and automation processes for others to follow.

Accountabilities and main responsibilities: In this role, you will be delivering solutions and patterns through Agile methodologies as part of a squad. You'll be collaborating with customers, partners and peers, and will help to identify data requirements. We'd also rely on you to:
- Help break down large problems into smaller iterative steps
- Contribute to defining the prioritisation of your squad's backlog
- Build out the modern data platform (data pipelines, data integrations, data preparation, data models, analytical and reporting marts) based on business requirements using agreed design patterns
- Help determine the most appropriate tool, method and design pattern in order to satisfy the requirement
- Proactively suggest improvements where you see issues
- Learn how to prepare our data in order to surface it for use within APIs
- Learn how to document, support, manage and maintain the modern data platform built within your squad
- Learn how to provide guidance and training to downstream consumers of data on how best to use the data in our platform
- Learn how to support and build new data APIs
- Contribute to evangelising and educating within Sanne about the better use and value of data
- Comply with all Sanne policies
- Any other duties in the scope of the role that the company requires

Qualifications and skills

Technical skills:
- Data warehousing and data modelling
- Data lakes (AWS Lake Formation, Azure Data Lake)
- Cloud data warehouses (AWS Redshift, Azure Synapse, Snowflake)
- ETL/ELT/pipeline tools (AWS Glue, Azure Data Factory, Fivetran, Stitch)
- Data message bus / pub-sub systems (AWS SNS & SQS, Azure ASQ, Kafka, RabbitMQ)
- Data programming languages (SQL, Python, Scala, Java)
- Cloud workflow services (AWS Step Functions, Azure Logic Apps, Camunda)
- Interactive query services (AWS Athena, Azure DL Analytics)
- Event and schedule management (AWS Lambda functions, Azure Functions)
- Traditional Microsoft BI stack (SQL Server, SSIS, SSAS, SSRS)
- Reporting and visualisation tools (Power BI, QuickSight, Mode)
- NoSQL & graph DBs (AWS Neptune, Azure Cosmos, Neo4j) (desirable)
- API management (desirable)

Core skills:
- Excellent communication and interpersonal skills
- Critical thinking and research capabilities
- Strong problem-solving skills
- Ability to plan and manage your own workload
- Work well on own initiative as well as part of a bigger team
- Working knowledge of Agile software development lifecycles
Posted 2 months ago
4.0 - 6.0 years
11 - 12 Lacs
Mumbai
Work from Office
Notice Period: Immediate

iSource Services is hiring for one of their clients for the position of Tableau Developer.

About the Role: We are seeking an experienced Tableau Developer with 4+ years of experience to work in Mumbai. The candidate should have a strong background in data visualization, analytics, and business intelligence to drive insights for the organization.

Responsibilities:
- Develop interactive Tableau dashboards and reports based on business requirements.
- Connect, clean, and transform data from multiple sources for visualization.
- Optimize dashboards for performance and usability.
- Work closely with business and technical teams to gather requirements.
- Implement best practices for data visualization and storytelling.
- Automate data refreshes and ensure data accuracy.
- Collaborate with data engineers and analysts for efficient data modeling.

Requirements:
- 4+ years of experience in Tableau development and data visualization.
- Proficiency in SQL, data modeling, and ETL processes.
- Experience with data sources like SQL Server, Snowflake, or AWS Redshift.
- Strong understanding of data warehousing concepts.
- Ability to analyze and interpret complex data sets.
- Experience in Python/R (preferred but not mandatory).
- Excellent communication and stakeholder management skills.
Posted 2 months ago
8.0 - 12.0 years
12 - 22 Lacs
Hyderabad, Secunderabad
Work from Office
- Proficiency in SQL, Python, and data pipeline frameworks such as Apache Spark, Databricks, or Airflow.
- Hands-on experience with cloud data platforms (e.g., Azure Synapse, AWS Redshift, Google BigQuery).
- Strong understanding of data modeling, ETL/ELT, and data lake / data warehouse / data mart architectures.
- Knowledge of Azure Data Factory or AWS Glue.
- Experience in developing reports and dashboards using tools like Power BI, Tableau, or Looker.
Posted 3 months ago
5.0 - 10.0 years
15 - 25 Lacs
Hyderabad/Secunderabad, Bangalore/Bengaluru, Delhi / NCR
Hybrid
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose, the relentless pursuit of a world that works better for people, we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of Lead Consultant - Data Engineer (AWS + Python, Spark, Kafka for ETL)!

Responsibilities:
- Develop, deploy, and manage ETL pipelines using AWS services, Python, Spark, and Kafka (an illustrative sketch follows this listing).
- Integrate structured and unstructured data from various data sources into data lakes and data warehouses.
- Design and deploy scalable, highly available, and fault-tolerant AWS data processes using AWS data services (Glue, Lambda, Step Functions, Redshift).
- Monitor and optimize the performance of cloud resources to ensure efficient utilization and cost-effectiveness.
- Implement and maintain security measures to protect data and systems within the AWS environment, including IAM policies, security groups, and encryption mechanisms.
- Migrate application data from legacy databases to cloud-based solutions (Redshift, DynamoDB, etc.) for high availability at low cost.
- Develop application programs using Big Data technologies such as Apache Hadoop and Apache Spark, with appropriate cloud-based services such as Amazon AWS.
- Build data pipelines by building ETL (Extract-Transform-Load) processes.
- Implement backup, disaster recovery, and business continuity strategies for cloud-based applications and data.
- Analyse business and functional requirements, which involves reviewing existing system configurations and operating methodologies as well as understanding evolving business needs.
- Analyse requirements/user stories in business meetings, strategize the impact of requirements on different platforms/applications, and convert business requirements into technical requirements.
- Participate in design reviews to provide input on functional requirements, product designs, schedules and/or potential problems.
- Understand the current application infrastructure and suggest cloud-based solutions that reduce operational cost and require minimal maintenance while providing high availability with improved security.
- Perform unit testing on modified software to ensure that new functionality works as expected while existing functionality continues to work in the same way.
- Coordinate with release management and other supporting teams to deploy changes to the production environment.

Qualifications we seek in you!

Minimum Qualifications:
- Experience in designing and implementing data pipelines, building data applications, and data migration on AWS.
- Strong experience implementing data lakes using AWS services like Glue, Lambda, Step Functions, and Redshift.
- Experience with Databricks is an added advantage.
- Strong experience in Python and SQL.
- Proven expertise in AWS services such as S3, Lambda, Glue, EMR, and Redshift.
- Advanced programming skills in Python for data processing and automation.
- Hands-on experience with Apache Spark for large-scale data processing.
- Experience with Apache Kafka for real-time data streaming and event processing.
- Proficiency in SQL for data querying and transformation.
- Strong understanding of security principles and best practices for cloud-based environments.
- Experience with monitoring tools and implementing proactive measures to ensure system availability and performance.
- Excellent problem-solving skills and the ability to troubleshoot complex issues in a distributed, cloud-based environment.
- Strong communication and collaboration skills to work effectively with cross-functional teams.

Preferred Qualifications / Skills:
- Master's degree in Computer Science, Electronics, or Electrical Engineering.
- AWS Data Engineering and Cloud certifications, Databricks certifications.
- Experience with multiple data integration technologies and cloud platforms.
- Knowledge of Change & Incident Management processes.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
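For illustration only, a minimal PySpark Structured Streaming sketch of the Kafka-to-lake ETL step referenced in the responsibilities above. It assumes the Spark Kafka connector package is available on the cluster; the broker address, topic, S3 paths, and event schema are hypothetical placeholders, not details from the posting.

```python
# Minimal sketch of one Kafka -> S3 ETL step with PySpark Structured Streaming.
# Assumptions: broker, topic, bucket paths, and the event schema are placeholders;
# the Kafka connector package is available on the cluster.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

event_schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
       .option("subscribe", "orders")                       # placeholder topic
       .load())

# Kafka delivers binary payloads; parse the JSON value into typed columns.
orders = (raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
             .select("e.*"))

query = (orders.writeStream.format("parquet")
         .option("path", "s3://example-bucket/lake/orders/")           # placeholder path
         .option("checkpointLocation", "s3://example-bucket/chk/orders/")
         .outputMode("append")
         .start())
query.awaitTermination()
```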
Posted 3 months ago
8.0 - 13.0 years
10 - 15 Lacs
Pune
Work from Office
What You'll Do

The Global Analytics and Insights (GAI) team is seeking an experienced Data Visualization Manager to lead our data-driven decision-making initiatives. The ideal candidate will have a strong background in Power BI and expert-level SQL proficiency to drive actionable insights, demonstrated leadership and mentoring experience, and the ability to drive innovation and manage complex projects. You will become an expert in Avalara's financial, marketing, sales, and operations data. This position reports to a Senior Manager.

What Your Responsibilities Will Be
- Define and execute the organization's BI strategy, ensuring alignment with business goals.
- Lead, mentor, and manage a team of BI developers and analysts, fostering continuous learning.
- Develop and implement robust data visualization and reporting solutions using Power BI.
- Optimize data models, dashboards, and reports to provide meaningful insights and support decision-making.
- Collaborate with business leaders, analysts, and cross-functional teams to gather and translate requirements into actionable BI solutions.
- Be a trusted advisor to business teams, identifying opportunities where BI can drive efficiencies and improvements.
- Ensure data accuracy, consistency, and integrity across multiple data sources.
- Stay updated with the latest advancements in BI tools, SQL performance tuning, and data visualization best practices.
- Define and enforce BI development standards, governance, and documentation best practices.
- Work closely with Data Engineering teams to define and maintain scalable data pipelines.
- Drive automation and optimization of reporting processes to improve efficiency.

What You'll Need to be Successful
- 8+ years of experience in Business Intelligence, Data Analytics, or related fields.
- 5+ years of expert proficiency in Power BI, including DAX, Power Query, data modeling, and dashboard creation.
- 5+ years of strong SQL skills, with experience in writing complex queries, performance tuning, and working with large datasets.
- Familiarity with cloud-based BI solutions (e.g., Azure Synapse, AWS Redshift, Snowflake) is a plus.
- Understanding of ETL processes and data warehousing concepts.
- Strong problem-solving, analytical thinking, and decision-making skills.
Posted 3 months ago
6.0 - 10.0 years
6 - 16 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Role: AWS Redshift Ops + PL/SQL + Unix

Years of experience: 6+

Detailed job description / skill set:
- Incident management
- Troubleshooting issues
- Contributing to development
- Collaborating with other teams
- Suggesting improvements
- Enhancing system performance
- Training new employees

Mandatory skills: AWS Redshift, PL/SQL, Apache Airflow, Unix, ETL, DWH
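For illustration only, a minimal Apache Airflow DAG in the spirit of this role: a nightly job that runs a Unix housekeeping script and then loads Redshift over its PostgreSQL-compatible endpoint. It assumes Airflow 2.x; the DAG id, schedule, script path, cluster host, and SQL file are hypothetical placeholders.

```python
# Minimal sketch of an Airflow DAG combining a Unix step and a Redshift load.
# Assumptions (Airflow 2.x): all ids, paths, hosts, and credentials are placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="redshift_nightly_load",            # placeholder DAG id
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",              # 02:00 daily
    catchup=False,
) as dag:
    archive_logs = BashOperator(
        task_id="archive_logs",
        bash_command="/opt/etl/bin/archive_logs.sh",   # placeholder Unix script
    )
    load_redshift = BashOperator(
        task_id="load_redshift",
        # psql against Redshift's PostgreSQL-compatible endpoint; placeholder host and file
        bash_command="psql -h example-cluster.redshift.amazonaws.com -p 5439 "
                     "-U etl_user -d analytics -f /opt/etl/sql/nightly_load.sql",
    )
    archive_logs >> load_redshift
```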
Posted 3 months ago
3.0 - 6.0 years
5 - 7 Lacs
Bengaluru
Work from Office
Job Title: Cloud Data Warehouse Administrator (DBA) - AWS Redshift | Titan Company Limited

Company: Titan Company Limited
Location: Corporate Office, Bengaluru
Experience: 3+ years
Education: BE / MCA / MSc-IT (from reputed institutions)

Job Description
Titan Company Limited is looking for a Cloud Data Warehouse Administrator (DBA) to join our growing Digital team in Bengaluru. The ideal candidate will have strong expertise in AWS-based data warehouse solutions with hands-on experience in Redshift (mandatory), RDS, S3, and DynamoDB, along with an eye for performance, scalability, and cost optimization.

Key Responsibilities
- Administer and manage AWS data environments: Redshift, RDS, DynamoDB, S3
- Monitor system performance and troubleshoot data-related issues
- Ensure availability, backup, disaster recovery, and security of databases
- Design and implement cost-optimized, high-availability solutions
- Maintain operational documentation and SOPs for all DBA tasks
- Collaborate with internal and external teams for issue resolution and enhancements
- Maintain data-level security (row/column level, encryption, masking)
- Analyze performance and implement improvements proactively

Required Skills and Experience
- 4+ years of experience in a DBA role; 2+ years on AWS cloud (Redshift, RDS, Aurora)
- Experience in managing cloud database architectures end-to-end
- Expertise in database performance tuning, replication, and DR strategies
- Familiarity with Agile working environments and cross-functional collaboration
- Excellent communication and documentation skills
- Preferred: AWS/DBA certifications

About Titan Company Limited
Titan Company Limited, a part of the Tata Group, is one of India's most admired lifestyle companies. With a strong portfolio in watches, eyewear, jewelry, and accessories, Titan is committed to innovation, quality, and cutting-edge technology through its Digital initiatives.

Interested candidates, kindly share your details on amruthaj@titan.co.in
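For illustration only, a minimal boto3 sketch of one routine backup task from this posting: taking a timestamped manual Redshift snapshot as part of a backup/DR runbook. The cluster identifier and region are hypothetical placeholders, and AWS credentials are assumed to be configured in the environment.

```python
# Minimal sketch of a manual Redshift snapshot for a backup/DR runbook.
# Assumptions: cluster id and region are placeholders; AWS credentials are
# already configured (environment, profile, or instance role).
from datetime import datetime, timezone
import boto3

def take_manual_snapshot(cluster_id: str, region: str = "ap-south-1") -> str:
    """Create a timestamped manual snapshot and return its identifier."""
    redshift = boto3.client("redshift", region_name=region)
    snapshot_id = f"{cluster_id}-manual-{datetime.now(timezone.utc):%Y%m%d%H%M}"
    redshift.create_cluster_snapshot(
        SnapshotIdentifier=snapshot_id,
        ClusterIdentifier=cluster_id,
    )
    return snapshot_id

# Example with a placeholder cluster name:
# take_manual_snapshot("example-analytics-cluster")
```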
Posted 3 months ago
12.0 - 15.0 years
14 - 17 Lacs
Bengaluru
Work from Office
Data and ML Platform engineering employs new-age technologies such as distributed computing constructs, real-time model predictions, deep learning, accelerated compute (GPU); scalable feature stores (Cassandra, MySQL, Elasticsearch, Solr, Aerospike); and scalable programming constructs in Python and ML frameworks (TensorFlow, PyTorch, etc.).

Roles and Responsibilities
- Drive the data architecture, data modelling, design, and implementation of data applications using the standard open source big data tech stack, Data Warehouse / MPP databases, and distributed systems.
- Gather business and functional requirements from external and/or internal users, and translate requirements into technical specifications to build robust, scalable, supportable solutions.
- Participate in and drive the full development lifecycle.
- Build standards and best practices around a Common Data Model and Architecture, Data Governance, Data Quality and Security for multiple business areas across Myntra. Collaborate with platform, product and other engineering and business teams to evangelise those standards for adoption across the org.
- Mentor data engineers at various levels of seniority by doing their design and code reviews, providing constructive and timely feedback on code quality, design issues, and technology choices, with performance and scalability being critical drivers.
- Manage resources on multiple technical projects and ensure schedules, milestones, and priorities are compatible with technology and business goals.
- Set up best practices to help the team achieve the above, and constantly think about improving the use of technology. Drive the adoption of these best practices around coding, design, quality, and performance in your team.
- Stay abreast of the technology industry and market trends in the field of data architecture and development.
- Demonstrate understanding of the data lifecycle (data modelling, processing, data quality, data evolution) and underlying tech stacks (Hadoop, Spark, MPP).
- Drive setting data architecture standards encompassing the complete data life cycle (ingestion, modelling, processing, consumption, change management, quality, anomaly detection).
- Challenge the status quo and propose innovative ways to process, model, and consume data when it comes to tech stack choices or design principles.
- Implement the long-term technology vision for your team.
- Be an active participant in technology forums; represent Myntra in external forums.

Qualifications & Experience
- 12 - 15 years of experience in software development.
- 5+ years of development and/or DBA experience in Relational Database Management Systems (RDBMS) such as MySQL or SQL Server.
- 8+ years of hands-on experience in implementation and performance tuning of MPP databases (Microsoft SQL DW, AWS Redshift, Teradata, Vertica, etc.).
- Experience designing database environments, analyzing production deployments, and making recommendations to optimize performance.
- Problem-solving skills for complex and large-scale data application problems.
- Technical breadth: exposure to a wide variety of problem spaces and technologies in data, e.g. real-time and batch data processing, options in commercial vs open source tech stacks.
- Hands-on experience with Enterprise Data Warehouse and big data storage and computation frameworks such as OLAP systems, MPP (SQL DW, Redshift, Oracle RAC, Teradata, Druid), and Hadoop compute (MR, Spark, Flink, Hive). Awareness of pitfalls and use cases for a large variety of solutions.
- Ability to drive capacity planning, performance optimization and large-scale system integrations.
- Expertise in designing, implementing, and operating stable, scalable solutions to flow data from production systems into analytical data platforms (big data tech stack + MPP) and into end-user facing applications, for both real-time and batch use cases.
- Data modelling skills (relational, multi-dimensional) and proficiency in one of the programming languages, preferably Java, Scala or Python.
- Drive design and development of automated monitoring, alerting, and self-healing (restartability / graceful failures) features while building the consumption pipelines (an illustrative sketch follows this listing).
- Mentoring skills: be the technical mentor to your team.
- B.Tech. or higher in Computer Science or equivalent required.
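For illustration only, a minimal plain-Python sketch of the "self-healing (restartability / graceful failures)" idea in the listing above: a retry-with-backoff wrapper around a pipeline step. Function and parameter names are hypothetical and not tied to any Myntra tooling.

```python
# Minimal sketch of restartability / graceful failure for a pipeline step:
# retry with exponential backoff, logging each failure, then re-raise.
# All names are illustrative placeholders.
import logging
import time
from functools import wraps

def with_retries(max_attempts: int = 3, base_delay: float = 5.0):
    """Retry a pipeline step with exponential backoff, logging each failure."""
    def decorator(step):
        @wraps(step)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return step(*args, **kwargs)
                except Exception:
                    logging.exception("Step %s failed (attempt %d/%d)",
                                      step.__name__, attempt, max_attempts)
                    if attempt == max_attempts:
                        raise  # surface the failure after exhausting retries
                    time.sleep(base_delay * 2 ** (attempt - 1))
        return wrapper
    return decorator

@with_retries(max_attempts=3)
def load_daily_batch(partition: str) -> None:
    ...  # placeholder for the actual ingestion/consumption logic
```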
Posted 3 months ago
3.0 - 8.0 years
10 - 18 Lacs
Faridabad
Work from Office
Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred.
Posted 3 months ago
3.0 - 8.0 years
10 - 18 Lacs
Vadodara
Work from Office
Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred.
Posted 3 months ago
3.0 - 8.0 years
10 - 18 Lacs
Varanasi
Work from Office
Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred.
Posted 3 months ago
3.0 - 8.0 years
10 - 18 Lacs
Agra
Work from Office
Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred.
Posted 3 months ago
3.0 - 8.0 years
10 - 18 Lacs
Surat
Work from Office
Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred.
Posted 3 months ago
3.0 - 8.0 years
10 - 18 Lacs
Ludhiana
Work from Office
Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred.
Posted 3 months ago
3.0 - 8.0 years
10 - 18 Lacs
Coimbatore
Work from Office
Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred.
Posted 3 months ago
3.0 - 8.0 years
10 - 18 Lacs
Jaipur
Work from Office
Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred.
Posted 3 months ago
3.0 - 8.0 years
10 - 18 Lacs
Lucknow
Work from Office
Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred.
Posted 3 months ago
3.0 - 8.0 years
10 - 18 Lacs
Mysuru
Work from Office
Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred.
Posted 3 months ago
3.0 - 8.0 years
10 - 18 Lacs
Chandigarh
Work from Office
Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred.
Posted 3 months ago
3.0 - 5.0 years
12 - 13 Lacs
Thane, Navi Mumbai, Pune
Work from Office
We at Acxiom Technologies are hiring a Pyspark Developer for our Mumbai location.

Relevant Experience: 1 to 4 years
Location: Mumbai
Mode of Work: Work from Office
Notice Period: Up to 20 days

Job Description:
- Proven experience as a Pyspark Developer.
- Hands-on expertise with AWS Redshift.
- Strong proficiency in Pyspark, Spark, Python, and Hive.
- Solid experience with SQL.
- Excellent communication skills.

Benefits of working at Acxiom:
- Statutory benefits
- Paid leaves
- Phenomenal career growth
- Exposure to the banking domain

About Acxiom Technologies: Acxiom Technologies is a leading software solutions services company that provides consulting services to global firms and has established itself as one of the most sought-after consulting organizations in the field of Data Management and Business Intelligence. Our website, https://www.acxtech.co.in/, gives a detailed overview of our company.

Interested candidates can share their resumes on 7977418669. Thank you.
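For illustration only, a minimal PySpark sketch of the Hive-plus-Redshift combination this role asks for: read a Hive table, aggregate it, and land Parquet on S3 for a later Redshift COPY or Spectrum external table. It assumes Hive support is configured on the cluster; the database, table, column, and S3 path names are hypothetical placeholders.

```python
# Minimal sketch: Hive table -> aggregation -> Parquet on S3 for Redshift to pick up.
# Assumptions: Hive support is configured; all names and paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("txn-summary")
         .enableHiveSupport()
         .getOrCreate())

txns = spark.table("banking_db.transactions")          # placeholder Hive table

daily = (txns.groupBy("account_id", "txn_date")
             .agg(F.sum("amount").alias("total_amount"),
                  F.count("*").alias("txn_count")))

# Land as Parquet; a Redshift COPY (or a Spectrum external table) can consume this.
daily.write.mode("overwrite").parquet("s3://example-bucket/marts/daily_txn_summary/")
```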
Posted 3 months ago
8 - 13 years
12 - 22 Lacs
Gurugram
Work from Office
Data & Information Architecture Lead | 8 to 15 years | Gurgaon

Summary: An excellent opportunity for Data Architect professionals with expertise in Data Engineering, Analytics, AWS, and Databases.

Location: Gurgaon

Your Future Employer: A leading financial services provider specializing in delivering innovative and tailored solutions to meet the diverse needs of its clients, offering a wide range of services including investment management, risk analysis, and financial consulting.

Responsibilities
- Design and optimize the architecture of the end-to-end data fabric, inclusive of the data lake, data stores and EDW, in alignment with EA guidelines and standards for cataloguing and maintaining data repositories.
- Undertake detailed analysis of the information management requirements across all systems, platforms and applications to guide the development of information management standards.
- Lead the design of the information architecture across multiple data types, working closely with various business partners/consumers, the MIS team, the AI/ML team and other departments to design, deliver and govern future-proof data assets and solutions.
- Design and ensure delivery excellence for: a) large and complex data transformation programs, b) small and nimble data initiatives to realize quick gains, and c) work with OEMs and partners to bring the best tools and delivery methods.
- Drive data domain modeling, data engineering and data resiliency design standards across the microservices and analytics application fabric for autonomy, agility and scale.

Requirements
- Deep understanding of the data and information architecture discipline, processes, concepts and best practices.
- Hands-on expertise in building and implementing data architecture for large enterprises.
- Proven architecture modelling skills, strong analytics and reporting experience.
- Strong data design, management and maintenance experience.
- Strong experience with data modelling tools.
- Extensive experience in cloud-native lake technologies, e.g. AWS native lake solutions.
Posted 4 months ago
5.0 - 10.0 years
15 - 25 Lacs
kolkata
Hybrid
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose, the relentless pursuit of a world that works better for people, we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of Lead Consultant - Data Engineer (AWS + Python, Spark, Kafka for ETL)!

Responsibilities:
- Develop, deploy, and manage ETL pipelines using AWS services, Python, Spark, and Kafka.
- Integrate structured and unstructured data from various data sources into data lakes and data warehouses.
- Design and deploy scalable, highly available, and fault-tolerant AWS data processes using AWS data services (Glue, Lambda, Step Functions, Redshift).
- Monitor and optimize the performance of cloud resources to ensure efficient utilization and cost-effectiveness.
- Implement and maintain security measures to protect data and systems within the AWS environment, including IAM policies, security groups, and encryption mechanisms.
- Migrate application data from legacy databases to cloud-based solutions (Redshift, DynamoDB, etc.) for high availability at low cost.
- Develop application programs using Big Data technologies such as Apache Hadoop and Apache Spark, with appropriate cloud-based services such as Amazon AWS.
- Build data pipelines by building ETL (Extract-Transform-Load) processes.
- Implement backup, disaster recovery, and business continuity strategies for cloud-based applications and data.
- Analyse business and functional requirements, which involves reviewing existing system configurations and operating methodologies as well as understanding evolving business needs.
- Analyse requirements/user stories in business meetings, strategize the impact of requirements on different platforms/applications, and convert business requirements into technical requirements.
- Participate in design reviews to provide input on functional requirements, product designs, schedules and/or potential problems.
- Understand the current application infrastructure and suggest cloud-based solutions that reduce operational cost and require minimal maintenance while providing high availability with improved security.
- Perform unit testing on modified software to ensure that new functionality works as expected while existing functionality continues to work in the same way.
- Coordinate with release management and other supporting teams to deploy changes to the production environment.

Qualifications we seek in you!

Minimum Qualifications:
- Experience in designing and implementing data pipelines, building data applications, and data migration on AWS.
- Strong experience implementing data lakes using AWS services like Glue, Lambda, Step Functions, and Redshift.
- Experience with Databricks is an added advantage.
- Strong experience in Python and SQL.
- Proven expertise in AWS services such as S3, Lambda, Glue, EMR, and Redshift.
- Advanced programming skills in Python for data processing and automation.
- Hands-on experience with Apache Spark for large-scale data processing.
- Experience with Apache Kafka for real-time data streaming and event processing.
- Proficiency in SQL for data querying and transformation.
- Strong understanding of security principles and best practices for cloud-based environments.
- Experience with monitoring tools and implementing proactive measures to ensure system availability and performance.
- Excellent problem-solving skills and the ability to troubleshoot complex issues in a distributed, cloud-based environment.
- Strong communication and collaboration skills to work effectively with cross-functional teams.

Preferred Qualifications / Skills:
- Master's degree in Computer Science, Electronics, or Electrical Engineering.
- AWS Data Engineering and Cloud certifications, Databricks certifications.
- Experience with multiple data integration technologies and cloud platforms.
- Knowledge of Change & Incident Management processes.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted Date not available