5 - 8 years
7 - 10 Lacs
Hyderabad
Work from Office
Responsibilities:
Work with business and technical leadership to understand requirements.
Design to the requirements and document the designs.
Write product-grade, performant code for data extraction, transformation, and loading using Spark and PySpark (a minimal illustrative sketch follows this listing).
Perform data modeling as needed for the requirements.
Write performant queries using Teradata SQL and Spark SQL against Teradata.
Implement DevOps pipelines to deploy code artifacts onto the designated platforms/servers, such as AWS (preferred).
Implement Hadoop job orchestration using shell scripting, CA7 Enterprise Scheduler, and Airflow.
Troubleshoot issues, provide effective solutions, and monitor jobs in the production environment.
Participate in sprint planning sessions, refinement/story-grooming sessions, daily scrums, demos, and retrospectives.
Experience Required: 5 - 8 years overall
Experience Desired: Experience with Jira and Confluence. Health care domain knowledge is a plus. Excellent work experience with Databricks or Teradata as a data warehouse. Experience in Agile and working knowledge of DevOps tools.
Education and Training Required:
Primary Skills:
Spark, PySpark, shell scripting, Teradata SQL (using Teradata SQL and Spark SQL), and stored procedures.
Git, Jenkins, Artifactory.
CA7 Enterprise Scheduler, Airflow.
AWS data services (S3, EC2, SQS); AWS services: SNS, Lambda, ECS, Glue, IAM; CloudWatch monitoring.
Databricks (Delta Lake, notebooks, pipelines, cluster management, Azure/AWS integration).
Good to have: Unix/Linux shell scripting (KSH) and basic administration of Unix servers.
Additional Skills: Exercises considerable creativity, foresight, and judgment in conceiving, planning, and delivering initiatives.
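Purely as an illustration of the kind of Spark/PySpark extract-transform-load step this listing describes (not code supplied by the employer), here is a minimal sketch; the JDBC URL, credentials, and table and column names are placeholder assumptions.

```python
# Minimal PySpark ETL sketch: extract from Teradata over JDBC, transform, load to S3.
# All connection details and table/column names below are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims_etl").getOrCreate()

# Extract: read a staging table from Teradata (the Teradata JDBC driver JAR must be on the classpath)
claims = (spark.read.format("jdbc")
          .option("url", "jdbc:teradata://td-host/DATABASE=staging")
          .option("dbtable", "staging.claims")
          .option("user", "etl_user")
          .option("password", "***")
          .option("driver", "com.teradata.jdbc.TeraDriver")
          .load())

# Transform: keep open claims and aggregate paid amounts per member
open_claims = (claims
               .filter(F.col("claim_status") == "OPEN")
               .groupBy("member_id")
               .agg(F.sum("paid_amount").alias("total_paid")))

# Load: write the result as Parquet for downstream consumers
open_claims.write.mode("overwrite").parquet("s3a://example-bucket/curated/open_claims/")
```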
Posted 2 months ago
8 - 12 years
30 - 35 Lacs
Hyderabad
Work from Office
Position Summary: The Health Services Data Design and Metadata Management team is hiring an Architecture Senior Advisor to work across all projects. The work involves understanding and driving data design best practices, including data modeling, mapping, and analysis, and helping others apply them across strategic data assets. The data models are wide-ranging and must include the appropriate metadata to support and improve our data intelligence. Data design centers around standard health care data (eligibility, claim, clinical, and provider data) across structured and unstructured data platforms.
Job Description & Responsibilities:
Perform data analysis, data modeling, and data mapping following industry and Evernorth data design standards for analytics/data warehouses and operational data stores across various DBMS types, including Teradata, Oracle, Cloud, Hadoop, Databricks, and data lake platforms.
Perform data analysis, profiling, and validation, contributing to data quality efforts to understand data characteristics and ensure data correctness/condition for use.
Participate in and coordinate data model metadata development processes to support ongoing development efforts (data dictionary, NSM, and FET files), maintenance of data model/data mapping metadata, and linking of our data design metadata to business terms, data models, mapping documents, ETL jobs, and data model governance operations (policies, standards, best practices).
Facilitate and actively participate in data model/data mapping reviews and audits, fostering collaborative working relationships and partnerships with multidisciplinary teams.
Provide guidance, mentoring, and training as needed in data modeling, data lineage, DDL code, and the associated toolsets (Erwin Data Modeler, Erwin Web Portal, Erwin Model Mart, Erwin Data Intelligence Suite, Alation).
Assist with the creation, documentation, and maintenance of Evernorth data design standards and best practices involving data modeling, data mapping, and metadata capture, including data sensitivity, data quality rules, and reference data usage.
Develop and facilitate strong partnerships and working relationships with Data Governance, delivery, and other data partners.
Continuously improve operational processes for data design metadata management for global and strategic data.
Interact with business stakeholders and IT in defining and managing data design.
Coordinate, collaborate, and innovate with Solution Verticals, Data Lake teams, and IT & Business Portfolios to ensure alignment of data design metadata and related information with ongoing programs (cyber risk and security) and development efforts.
Experience Required: 8+ years' experience with data modeling (logical/physical data models, canonical structures, etc.) and SQL code.
Experience Desired:
Subject matter expertise level experience preferred.
Experience executing data model/data lineage governance across business and technical data.
Experience utilizing data model/lineage/mapping/analysis management tools for business, technical, and operational metadata (Erwin Data Modeler, Erwin Web Portal, Erwin Model Mart, Erwin Data Intelligence Suite, Alation).
Experience working in an Agile delivery environment (Jira, Confluence, SharePoint, Git, etc.).
Education and Training Required: Advanced degree in Computer Science or a related discipline and at least six, typically eight or more, years of experience in all phases of data modeling, data warehousing, data mining, data entity analysis, logical database design, and relational database definition, or an equivalent combination of education and work experience.
Primary Skills: Physical data modeling, data warehousing, metadata, reference data, data mapping, data mining, Teradata, data quality, excellent communication skills, data analysis, Oracle, data governance, database management systems, Jira, DDL, data integration, Microsoft SharePoint, database modeling, Confluence, Agile, marketing analysis, operations, topo, data lineage, data warehouses, documentation, big data, web portals, maintenance, Erwin, SQL, unstructured data, audit, Git, pharmacy, DBMS, Databricks, AWS.
Posted 2 months ago
8 - 13 years
30 - 35 Lacs
Bengaluru
Work from Office
Works with business stakeholders to model their data landscape, obtains data extracts, and defines secure data exchange approaches. Explores suitable options, designs, and creates high-performance data pipelines (data lake / data warehouses) for specific analytical solutions.
Roles & Responsibilities:
Responsible for the design, development, and implementation of the ETL jobs for the ingestion and transformation areas of the data pipeline.
Works closely with Data Scientists to ensure data quality and availability for analytical modelling.
Collaborates with Data Scientists to map data fields to hypotheses and curate, wrangle, and prepare data for use in their advanced analytical models.
Acquires, ingests, and processes data from multiple sources and systems into Big Data platforms.
Understanding, assessing, and mapping the data landscape.
Maintaining our Information Security standards.
Defining the technology stack to be provisioned by our infrastructure team.
Building modular pipelines to construct features and modelling tables.
Experience & Education:
Bachelor's degree in computer science, BCA/MCA; Master's degree preferred.
Experience in data pipeline software engineering and implementing best practice in Python: linting, unit tests, integration tests, git flow/pull request process, object-oriented development, data validation, algorithms and data structures, technical troubleshooting and debugging, bash scripting (a brief illustrative sketch follows this listing).
Experience preparing data for analytics and following a data science workflow.
Experience with analytics (descriptive, predictive, EDA), feature engineering, and Python visualization libraries, e.g., matplotlib, seaborn, or others.
Comfortable with notebook and source code development: Jupyter, PyCharm/VS Code.
Ability to work under pressure with a solid sense for setting priorities.
Excellent interpersonal, leadership, and communication skills.
Strong command of the English language (both verbal and written).
Experienced in:
Big Data / database platforms like Hadoop, HBase, CouchDB, Hive, Pig, Teradata, Oracle, MySQL, etc.
Visualization tools like Tableau, QlikView, or similar reporting and BI packages.
CDC tools (Qlik Replicate preferred).
SQL, PL/SQL and similar languages, UNIX shell scripting.
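The posting above asks for pipeline engineering best practice in Python (unit tests, data validation). As a small, hypothetical illustration only, a transformation function and a pytest-style test might look like this; the column names are assumptions.

```python
# Hypothetical example of unit-testing a small data-preparation step with pandas and pytest.
import pandas as pd

def prepare_features(raw: pd.DataFrame) -> pd.DataFrame:
    """Drop rows with missing keys and derive a simple standardized amount feature."""
    cleaned = raw.dropna(subset=["customer_id"]).copy()
    cleaned["amount_zscore"] = (
        (cleaned["amount"] - cleaned["amount"].mean()) / cleaned["amount"].std(ddof=0)
    )
    return cleaned

def test_prepare_features_drops_missing_keys():
    raw = pd.DataFrame({"customer_id": [1, None, 3], "amount": [10.0, 20.0, 40.0]})
    out = prepare_features(raw)
    assert out["customer_id"].notna().all()     # rows without a key are dropped
    assert "amount_zscore" in out.columns       # derived feature is present
```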
Posted 2 months ago
6 - 11 years
17 - 30 Lacs
Delhi NCR, Bengaluru, Hyderabad
Hybrid
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose, the relentless pursuit of a world that works better for people, we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.
Inviting applications for the role of Lead Consultant - Microsoft ETL Lead Engineer (Database Design | Agile Development Process | Release Management)!
Position Overview: We are currently seeking a highly experienced ETL engineer with hands-on experience in Microsoft ETL (Extract, Transform, Load) technologies. The ideal candidate will have a deep understanding of ETL processes, data warehousing, and data integration, with a proven track record of leading successful ETL implementations. As a principal ETL engineer, you will play a pivotal role in architecting, designing, and implementing ETL solutions to meet our organization's data needs.
Key Responsibilities:
• Lead the design and development of ETL processes using Microsoft ETL technologies such as SSIS (SQL Server Integration Services)
• Mandatory hands-on experience (70% development, 30% leadership)
• Collaborate with stakeholders to gather and analyze requirements for data integration and transformation.
• Design and implement data quality checks and error handling mechanisms within ETL processes.
• Lead a team of ETL developers, providing technical guidance, mentorship, and oversight.
• Perform code reviews and ensure adherence to best practices and coding standards.
• Troubleshoot and resolve issues related to data integration, ETL performance, and data quality.
• Work closely with database administrators, data architects, and business analysts to ensure alignment of ETL solutions with business requirements.
• Stay up-to-date with the latest trends and advancements in ETL technologies and best practices.
• Identify and resolve performance bottlenecks. Implement best practices for database performance tuning and optimization.
• Ensure data integrity, security, and availability.
• Create and maintain documentation for database designs, configurations, and procedures.
• Ensure compliance with data privacy and security regulations.
Qualifications:
Education:
• Bachelor's degree in Computer Science, Information Technology, or a related field.
Experience:
• Experience in designing, developing, and implementing ETL solutions using Microsoft ETL technologies, particularly SSIS.
• Strong understanding of data warehousing concepts, dimensional modeling, and ETL design patterns.
• Proficiency in SQL and experience working with relational databases, preferably Microsoft SQL Server.
• Experience leading ETL development teams and managing end-to-end ETL projects.
• Proven track record of delivering high-quality ETL solutions on time and within budget.
• Experience with other Microsoft data platform technologies (e.g., SSAS, SSRS) is a plus
• Familiarity with version control systems (e.g., Git)
• Knowledge of containerization and orchestration (e.g., Docker, Kubernetes) is a plus
Soft Skills:
o Strong analytical and problem-solving skills
o Excellent communication and collaboration abilities
o Ability to work independently and as part of a team
Preferred Qualifications:
o Experience with cloud-based database services (e.g., AWS RDS, Google Cloud SQL)
o Knowledge of other database systems (e.g., PGSQL, Oracle)
o Familiarity with Agile development methodologies
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. For more information, visit www.genpact.com. Follow us on Twitter, Facebook, LinkedIn, and YouTube. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 2 months ago
4 - 8 years
17 - 19 Lacs
Bengaluru
Work from Office
As a Campaign Analyst, you will:
Design and deliver a program of communications, including Marketing, Service, and Compliance campaigns, using MarTech platforms such as HCL Unica.
Ensure efficient and effective customer contact through campaign execution, including channel-specific rules, key customer segmentation, and standard inclusion and exclusion requirements.
Build data flows for single communications or orchestration of customer journeys.
Work closely with the 1:1 Marketing and Execution Channel teams to deliver end-to-end customer communications.
Be a key member of the Campaign Analytics team, supporting each other with peer reviews and knowledge sharing.
Develop a strong ability to translate data insights into practical business recommendations and communicate with stakeholders.
Use SQL coding to query databases.
Ensure a high standard is applied in the design and development of campaigns, focusing on first-time Quality Assurance pass rate.
Understand the legal, regulatory, and policy requirements for usage of customer data, including customer opt-outs, consents & preferences, and privacy considerations.
What will you bring? To grow and be successful in this role, you will ideally bring the following:
4+ years' experience in campaign tools (HCL Unica or similar)
Experience in executing Marketing and Compliance campaigns
Ability to effectively manage and communicate to all stakeholders (technical and non-technical)
Knowledge of tools and techniques for analysis, data manipulation, and presentation (e.g. SQL, Excel)
Understanding of data warehousing or databases (e.g. Teradata, Oracle)
Analytical and inquisitive mindset to continuously uncover campaign optimisation opportunities
Strong ability to translate data insights into practical business recommendations
Problem-solving capabilities.
Posted 2 months ago
7 - 12 years
30 - 40 Lacs
Bengaluru
Hybrid
Good in: Data Modelling, Python, ETL, (Big Data / MPP), Airflow, Kafka, (Presto / Spark), (Power BI / Tableau)
Good in: (Teradata / AWS Redshift / Google BigQuery / Azure Synapse Analytics)
79-year-old reputed MNC company
Required candidate profile: Good in Data Warehouse Design and Dimensional Modeling
Posted 2 months ago
3 - 5 years
5 - 8 Lacs
Hyderabad
Work from Office
Position Summary: Evernorth, a leading Health Services company, is looking for exceptional data engineers/developers for our Data and Analytics organization. In this role, you will actively participate with your development team on initiatives that support Evernorth's strategic goals, as well as with subject matter experts to understand the business logic you will be engineering. As a software engineer, you will help develop an integrated architectural strategy to support next-generation reporting and analytical capabilities on an enterprise-wide scale. You will work in an agile environment, delivering user-oriented products which will be available both internally and externally to our customers, clients, and providers. Candidates will be provided the opportunity to work on a range of technologies and data manipulation concepts. Specifically, this may include developing healthcare data structures and data transformation logic to allow for analytics and reporting for customer journeys, personalization opportunities, pre-active actions, text mining, action prediction, fraud detection, text/sentiment classification, collaborative filtering/recommendation, and/or signal detection. This position will involve taking these skills and applying them to some of the most exciting and massive health data opportunities that exist here at Evernorth. The ideal candidate will work in a team environment that demands technical excellence, whose members are expected to hold each other accountable for the overall success of the end product. The focus for this team is on the delivery of innovative solutions to complex problems, but also with a mind to drive simplicity in refining and supporting the solution by others.
Job Description & Responsibilities:
Be accountable for delivery of business functionality.
Work on the AWS cloud to migrate/re-engineer data and applications from on premise to cloud.
Responsible for engineering solutions conformant to enterprise standards, architecture, and technologies.
Provide technical expertise through a hands-on approach, developing solutions that automate testing between systems.
Perform peer code reviews, merge requests, and production releases.
Implement design/functionality using Agile principles.
Proven track record of quality software development and an ability to innovate outside of traditional architecture/software patterns when needed.
A desire to collaborate in a high-performing team environment, and an ability to influence and be influenced by others.
Have a quality mindset: not just code quality, but also ensuring ongoing data quality by monitoring data to identify problems before they have business impact.
Be entrepreneurial, business-minded, ask smart questions, take risks, and champion new ideas.
Take ownership and accountability.
Experience Required: 3 to 5 years of experience in application program development
Experience Desired:
Knowledge and/or experience with healthcare information domains.
Documented experience in a business intelligence or analytic development role on a variety of large-scale projects.
Documented experience working with databases larger than 5 TB and excellent data analysis skills.
Experience with TDD/BDD.
Experience working with Spark and real-time analytic frameworks.
Education and Training Required: Bachelor's degree in Engineering or Computer Science
Primary Skills: Python, Databricks, Teradata, SQL, UNIX, ETL, Data Structures, Looker, Tableau, Git, Jenkins, RESTful & GraphQL APIs.
AWS services such as Glue, EMR, Lambda, Step Functions, CloudTrail, CloudWatch, SNS, SQS, S3, VPC, EC2, RDS, and IAM (an illustrative Glue sketch follows this listing).
Additional Skills:
Ability to rapidly prototype and storyboard/wireframe development as part of application design.
Write referenceable and modular code.
Willingness to continuously learn and share learnings with others.
Ability to communicate design processes, ideas, and solutions clearly and effectively to teams and clients.
Ability to manipulate and transform large datasets efficiently.
Excellent troubleshooting skills to root-cause complex issues.
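As a purely illustrative sketch of one of the AWS services named above, the snippet below triggers and polls an AWS Glue job with boto3; the job name and region are assumptions, not details from the posting.

```python
# Illustrative only: start an AWS Glue job run and poll it until completion.
import time
import boto3

glue = boto3.client("glue", region_name="us-east-1")  # assumed region

run = glue.start_job_run(JobName="curate-claims-job")  # assumed job name
run_id = run["JobRunId"]

while True:
    state = glue.get_job_run(JobName="curate-claims-job", RunId=run_id)["JobRun"]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        print("Glue job finished with state:", state)
        break
    time.sleep(30)  # wait before polling again
```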
Posted 2 months ago
15 - 20 years
11 - 15 Lacs
Mumbai
Work from Office
More than 15 years of experience in Technical, Solutioning, and Analytical roles.
5+ years of experience in building and managing Data Lakes, Data Warehouses, Data Integration, Data Migration, and Business Intelligence/Artificial Intelligence solutions in the Cloud (GCP/AWS/Azure).
Ability to understand business requirements, translate them into functional and non-functional areas, and define non-functional boundaries in terms of Availability, Scalability, Performance, Security, Resilience, etc.
Experience in architecting, designing, and implementing end-to-end data pipelines and data integration solutions for varied structured and unstructured data sources and targets.
Experience of having worked in distributed computing and enterprise environments like Hadoop and GCP/AWS/Azure Cloud.
Well-versed with various Data Integration and ETL technologies in the Cloud like Spark, PySpark/Scala, Dataflow, DataProc, EMR, etc.
Experience of having worked with traditional ETL tools like Informatica, DataStage, OWB, Talend, etc.
Deep knowledge of one or more Cloud and On-Premise databases like Cloud SQL, Cloud Spanner, Bigtable, RDS, Aurora, DynamoDB, Oracle, Teradata, MySQL, DB2, SQL Server, etc.
Exposure to any of the NoSQL databases like MongoDB, CouchDB, Cassandra, graph databases, etc.
Experience in architecting and designing scalable data warehouse solutions on the cloud on BigQuery or Redshift.
Experience of having worked on one or more data integration, storage, and data pipeline toolsets like S3, Cloud Storage, Athena, Glue, Sqoop, Flume, Hive, Kafka, Pub/Sub, Kinesis, Dataflow, DataProc, Airflow, Composer, Spark SQL, Presto, EMRFS, etc.
Preferred experience of having worked on Machine Learning frameworks like TensorFlow, PyTorch, etc.
Good understanding of Cloud solutions for IaaS, PaaS, SaaS, Containers, and Microservices Architecture and Design.
Ability to compare products and tools across technology stacks on Google, AWS, and Azure Cloud.
Good understanding of BI Reporting and Dashboarding and one or more toolsets associated with it like Looker, Tableau, Power BI, SAP BO, Cognos, Superset, etc.
Understanding of Security features and Policies in one or more Cloud environments like GCP/AWS/Azure.
Experience of having worked in business transformation projects for the movement of On-Premise data solutions to Clouds like GCP/AWS/Azure.
Role:
Lead multiple data engagements on GCP Cloud for data lakes, data engineering, data migration, data warehouse, and business intelligence.
Interface with multiple stakeholders within IT and business to understand the data requirements.
Take complete responsibility for the successful delivery of all allocated projects on the parameters of Schedule, Quality, and Customer Satisfaction.
Responsible for design and development of distributed, high-volume, multi-threaded batch, real-time, and event processing systems.
Implement processes and systems to validate data and monitor data quality, ensuring production data is always accurate and available for key stakeholders and business processes that depend on it.
Work with the Pre-Sales team on RFPs and RFIs and help them by creating solutions for data.
Mentor young talent within the team; define and track their growth parameters.
Contribute to building assets and accelerators.
Other Skills: Strong communication and articulation skills. Good leadership skills. Should be a good team player. Good analytical and problem-solving skills.
Posted 2 months ago
6 - 8 years
14 - 16 Lacs
Chennai, Pune, Delhi
Work from Office
Entity-Relationship Diagrams (ERD); Dimensional Modelling (Star Schema, Snowflake Schema); Database Management Systems (DBMS); PostgreSQL; Data Warehousing Solutions (Amazon Redshift, Snowflake, Teradata); Programming and Scripting Languages (SQL, Python, Java, Shell Scripting); AWS (Amazon Web Services) Cloud Platforms; Data Visualization Tools (Tableau, Power BI); Version Control and CI/CD (Git/GitHub, Jenkins, Docker, Kubernetes)
Key Responsibilities:
1. Design and Development:
o Design and implement data warehouse architecture, including data models, schemas, and ETL processes.
o Develop and maintain ETL scripts and workflows to extract, transform, and load data from various sources into the data warehouse (an illustrative incremental-load sketch follows this listing).
o Create and optimize database queries, stored procedures, and views.
2. Data Integration:
o Integrate data from multiple heterogeneous sources, including databases, flat files, APIs, and external data feeds.
o Ensure data consistency, quality, and integrity throughout the ETL process.
3. Performance Optimization:
o Monitor and tune the performance of data warehouse components.
o Implement indexing, partitioning, and other optimization techniques to improve query performance.
4. Data Quality and Governance:
o Establish and enforce data quality standards and governance policies.
o Perform data profiling, validation, and cleansing activities.
5. Collaboration and Support:
o Work closely with business analysts, data scientists, and other stakeholders to understand data requirements and deliver appropriate solutions.
o Provide support and troubleshooting for data warehouse-related issues.
6. Documentation and Reporting:
o Document data models, ETL processes, and data flows.
o Generate and maintain reports, dashboards, and data visualizations as required.
Added Advantage: Knowledge in mobile development. Domain knowledge in Airline PSS. Experience in onsite implementation of projects. Experience in report development tools like Power BI and Tableau.
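As a hedged sketch of the incremental-load style of ETL described above, here is one common staging-to-dimension upsert pattern against PostgreSQL (one of the listed databases); the schema, table, and column names are assumptions.

```python
# Minimal incremental-load sketch: upsert a staged delta into a dimension table in PostgreSQL.
import psycopg2

UPSERT_SQL = """
INSERT INTO dw.dim_customer (customer_id, customer_name, updated_at)
SELECT customer_id, customer_name, updated_at
FROM staging.customer_delta
ON CONFLICT (customer_id)
DO UPDATE SET customer_name = EXCLUDED.customer_name,
              updated_at    = EXCLUDED.updated_at;
"""

def run_incremental_load(dsn: str) -> None:
    # The connection context manager wraps the statement in a single transaction.
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(UPSERT_SQL)

run_incremental_load("dbname=dw user=etl password=*** host=localhost")  # placeholder DSN
```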
Posted 2 months ago
6 - 10 years
8 - 12 Lacs
Karnataka
Work from Office
Technical Skills:
6+ years of hands-on working experience as a full-time data modeler.
Should have experience in designing semantic data models.
Working experience with relational database concepts.
Strong skills in database design for Teradata, DB2, and SQL Server.
Experience with data modeling tools (PowerDesigner).
Understanding of metadata tools (MetaCenter / Collibra).
Understanding of the concepts of Hadoop and NoSQL databases.
Familiarity with Data Warehouse / Business Intelligence concepts.
Analytic Skills:
Must have the ability to develop and maintain deliverables to schedule and the ability to adapt to changes in requirements and priorities.
Must be a self-starter, show initiative, and have the ability to work on multiple projects at the same time, prioritizing as appropriate.
Liaise with business analysts, internal and external application development teams, DBAs, data warehouse, and QA groups.
Must be able to work independently as part of a global team.
Posted 2 months ago
2 - 7 years
2 - 5 Lacs
Hyderabad
Work from Office
Work with a fast-growing and innovative sales solutions company. Hands-on experience in business intelligence and sales analytics. Opportunity to work with top-tier clients and industry leaders.
Responsibilities:
Sales Data Management & Reporting:
Transform raw sales data into valuable business insights using BI tools (Tableau, Power BI, etc.).
Develop and deploy robust reporting dashboards for tracking performance metrics.
Manage ETL processes (Extract, Transform, Load) to streamline data flow.
Analyze large datasets to identify market trends and business growth opportunities.
Business Intelligence & Analytics:
Develop data-driven strategies to optimize sales performance.
Build predictive models to forecast sales trends and customer behavior.
Conduct deep-dive analysis on business performance and suggest data-backed improvements.
Work closely with stakeholders to understand their requirements and provide customized analytical solutions.
Client & Team Management:
Act as the primary liaison between business and technical teams.
Gather and analyze business requirements to enhance operational efficiency.
Provide strategic recommendations to clients and internal teams based on data insights.
Qualifications - What We Expect from You:
Educational Background: Tech / BE / BCA / BSc in Computer Science, Engineering, or a related field.
Experience: 2+ years of relevant experience as a Business Analyst focusing on sales reporting & analytics.
Job Benefits
Must-Have Skills:
Strong expertise in BI tools (Tableau, Power BI, Oracle BI)
Hands-on experience in ETL processes (Informatica, Talend, Teradata, Jasper, etc.)
Solid understanding of data modeling, data analytics, and business reporting
Excellent client management & stakeholder communication skills
Strong analytical and problem-solving mindset
Bonus Skills (Preferred but Not Mandatory):
Experience in sales process automation & CRM analytics
Exposure to AI & Machine Learning in sales analytics
Posted 2 months ago
6 - 9 years
18 - 22 Lacs
Trichy, Chennai, Madurai
Work from Office
Bachelor's / Master's degree in a quantitative field (e.g. Computer Science, Statistics, Engineering).
6+ years of relevant experience in Data Analytics or Data Engineering.
5+ years of hands-on experience in SQL/NoSQL, working with large datasets.
Hands-on application development experience in AWS and experience in Snowflake, Tableau, or Power BI (an illustrative Snowflake query sketch follows this listing).
Good working knowledge of any programming language like JavaScript or Python.
Proficiency with REST APIs, JSON, AWS.
Experience in working on and delivering projects independently.
Ability to multi-task and context-switch between projects and tasks.
Curiosity, passion, and drive for data queries, analysis, quality, and models.
Excellent communication, initiative, and coordination skills with great attention to detail.
Ability to explain and discuss complex topics with both experts and business leaders.
Application development background using any contact center product suites such as Genesys, Avaya, Cisco, etc. is an added advantage.
Key Responsibilities:
Interface with business customers, gathering and understanding requirements.
Interface with customer and Genesys data science teams in discovery, extraction, loading, data transformation, and analysis of results.
Define and utilize a data intuition process to cleanse and verify the integrity of customer and Genesys data to be used for analysis.
Implement, own, and improve data pipelines using best practices in data modelling and ETL/ELT processes.
Build, improve, and provide ongoing optimization of high-quality models.
Work with PS Engineering to deliver specific customer requirements and report back customer feedback, issues, and feature requests.
Continuous improvement in reporting, analysis, and overall process.
Visualize, present, and demonstrate findings as required.
Perform knowledge transfer to customer and internal teams.
Communicate within the global community, respecting cultural, language, and time zone variations.
Demonstrate flexibility to adjust working hours to match customer and team interactions.
Minimum Requirements:
Knowledge of SQL language and cloud-based technologies.
Data warehousing concepts, data modelling, metadata management.
Data lakes, multi-dimensional models, data dictionaries.
Migration to AWS or Azure Snowflake platform.
Performance tuning and setting up resource monitors.
Snowflake modelling: roles, databases, schemas.
SQL performance measuring, query tuning, and database tuning.
ETL tools with cloud-driven skills.
Ability to build analytical solutions and models.
Coding in languages like Python, Java, JavaScript.
Root cause analysis of models with solutions.
Managing sets of XML, JSON, and CSV from disparate sources.
SQL-based databases like Oracle, SQL Server, Teradata, etc.
Snowflake warehousing, architecture, processing, administration.
Data ingestion into Snowflake.
Enterprise-level technical exposure to Snowflake applications.
Desirable Skills:
SQL (advanced), Snowflake. Good to have: AWS Lambda, S3.
Expertise in Data Engineering.
Prior working experience with contact center reporting with Genesys CX, Genesys Engage, PureConnect, Cisco, Avaya, etc. is an added advantage.
Prior working experience with Genesys Cloud is preferable.
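Illustrative only: one way the hands-on Snowflake work mentioned above might look from Python, using the official Snowflake connector; the account, warehouse, and table names are placeholders.

```python
# Query Snowflake from Python with the snowflake-connector-python package (placeholder credentials).
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345",            # assumed account identifier
    user="analytics_user",
    password="***",
    warehouse="ANALYTICS_WH",
    database="CONTACT_CENTER",
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    # Example: daily interaction volumes by queue (hypothetical table and columns)
    cur.execute("""
        SELECT queue_name, DATE_TRUNC('day', call_start) AS call_day, COUNT(*) AS calls
        FROM interactions
        GROUP BY 1, 2
        ORDER BY call_day
    """)
    for queue_name, call_day, calls in cur.fetchall():
        print(queue_name, call_day, calls)
finally:
    conn.close()
```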
Posted 2 months ago
10 - 15 years
22 - 30 Lacs
Bengaluru
Work from Office
Platform Strategy & Vision: Define and own the roadmap for the Finance Data Hub platform, ensuring it aligns with business objectives and supports broader data product initiatives.
Technical Requirements: Collaborate with architects, data engineers, and governance teams to define and prioritise platform capabilities, including scalability, security, resilience, and data lineage.
Integration Management: Ensure the platform seamlessly integrates with data streams and serves UI products, enabling efficient data ingestion, transformation, storage, and consumption.
Infrastructure Coordination: Work closely with infrastructure and DevOps teams to ensure platform performance, cost optimisation, and alignment with enterprise architecture standards.
Governance & Compliance: Partner with data governance and security teams to ensure the platform adheres to data management standards, privacy regulations, and security protocols.
Backlog Management: Own and prioritise the platform development backlog, balancing technical needs with business priorities, and ensuring timely delivery of enhancements.
Agile Leadership: Support and often lead agile ceremonies, write clear user stories focused on platform capabilities, and facilitate collaborative sessions with technical teams.
Stakeholder Communication: Provide clear updates on platform progress, challenges, and dependencies to stakeholders, ensuring alignment across product and engineering teams.
Continuous Improvement: Regularly assess platform performance, identify areas for optimization, and champion initiatives that enhance reliability, scalability, and efficiency.
Risk Management: Identify and mitigate risks related to platform stability, security, and data integrity.
Skills
Must have:
Proven 10+ years' experience as a Product Manager focused on data platforms, infrastructure, or similar technical products.
Strong understanding of data platforms and infrastructure, including data ingestion, processing, storage, and access within modern data ecosystems.
Experience with cloud data platforms (e.g., Azure, AWS, GCP) and knowledge of data lake architectures.
Understanding of data governance, security, and compliance best practices.
Strong stakeholder management skills, particularly with technical teams (engineering, architecture, security).
Experience managing product backlogs and roadmaps in an Agile environment.
Ability to balance technical depth with business acumen to drive effective decision-making.
Nice to have:
Experience with financial systems and data sources, such as HFM, Fusion, or other ERPs.
Knowledge of data orchestration and integration tools (e.g., Apache Airflow, Azure Data Factory).
Experience with transitioning platforms from legacy technologies (e.g., Teradata) to modern solutions.
Familiarity with cost optimization strategies for cloud platforms.
Posted 2 months ago
3 - 8 years
17 - 22 Lacs
Bengaluru
Work from Office
Job Description:
Key Responsibilities:
Work closely with clients to understand their business requirements and design data solutions that meet their needs.
Develop and implement end-to-end data solutions that include data ingestion, data storage, data processing, and data visualization components.
Design and implement data architectures that are scalable, secure, and compliant with industry standards.
Work with data engineers, data analysts, and other stakeholders to ensure the successful delivery of data solutions.
Participate in presales activities, including solution design, proposal creation, and client presentations.
Act as a technical liaison between the client and our internal teams, providing technical guidance and expertise throughout the project lifecycle.
Stay up-to-date with industry trends and emerging technologies related to data architecture and engineering.
Develop and maintain relationships with clients to ensure their ongoing satisfaction and identify opportunities for additional business.
Understands the entire end-to-end AI lifecycle, from ingestion to inferencing, along with operations.
Exposure to Gen AI emerging technologies.
Exposure to the Kubernetes platform and hands-on experience deploying and containerizing applications.
Good knowledge of data governance, data warehousing, and data modelling.
Requirements:
Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
10+ years of experience as a Data Solution Architect, with a proven track record of designing and implementing end-to-end data solutions.
Strong technical background in data architecture, data engineering, and data management.
Extensive experience working with any of the Hadoop flavours, preferably Data Fabric.
Experience with presales activities such as solution design, proposal creation, and client presentations.
Familiarity with cloud-based data platforms (e.g., AWS, Azure, Google Cloud) and related technologies such as data warehousing, data lakes, and data streaming.
Experience with Kubernetes and the Gen AI tools and tech stack.
Excellent communication and interpersonal skills, with the ability to effectively communicate technical concepts to both technical and non-technical audiences.
Strong problem-solving skills, with the ability to analyze complex data systems and identify areas for improvement.
Strong project management skills, with the ability to manage multiple projects simultaneously and prioritize tasks effectively.
Tools and Tech Stack:
Data Architecture and Engineering:
Hadoop Ecosystem: Preferred: Cloudera Data Platform (CDP) or Data Fabric. Tools: HDFS, Hive, Spark, HBase, Oozie.
Data Warehousing: Cloud-based: Azure Synapse, Amazon Redshift, Google BigQuery, Snowflake, Azure Databricks. On-premises: Teradata, Vertica.
Data Integration and ETL Tools: Apache NiFi, Talend, Informatica, Azure Data Factory, Glue.
Cloud Platforms: Azure (preferred for its Data Services and Synapse integration), AWS, or GCP.
Cloud-native Components: Data Lakes: Azure Data Lake Storage, AWS S3, or Google Cloud Storage. Data Streaming: Apache Kafka, Azure Event Hubs, AWS Kinesis.
HPE Platforms: Data Fabric, AI Essentials or Unified Analytics, HPE MLDM and HPE MLDE.
AI and Gen AI Technologies:
AI Lifecycle Management: MLOps: MLflow, Kubeflow, Azure ML, SageMaker, or Ray. Inference tools: TensorFlow Serving, KServe, Seldon.
Generative AI: Frameworks: Hugging Face Transformers, LangChain.
Tools: OpenAI API (e.g., GPT-4).
Orchestration and Deployment:
Kubernetes: Platforms: Azure Kubernetes Service (AKS), Amazon EKS, Google Kubernetes Engine (GKE), or open-source K8s. Tools: Helm.
CI/CD for Data Pipelines and Applications: Jenkins, GitHub Actions, GitLab CI, or Azure DevOps.
Posted 2 months ago
4 - 7 years
5 - 13 Lacs
Bengaluru
Work from Office
Data Engineer - please find the JD below for the ETL Snowflake developer role.
4 - 5 years of strong experience in advanced SQL on any database (preferably Snowflake, Oracle, or Teradata).
Extensive experience in the data integration area and hands-on experience in any of the ETL tools like DataStage, Informatica, SnapLogic, etc.
Able to transform technical requirements into data collection queries.
Should be capable of working with business and other IT teams and converting the requirements into queries.
Good understanding of ETL architecture and design.
Good knowledge of UNIX commands, databases, SQL, PL/SQL.
Good to have experience with AWS Glue.
Good to have knowledge of the Qlik replication tool.
Posted 2 months ago
4 - 6 years
3 - 6 Lacs
Pune
Work from Office
Key Responsibilities:
Design and develop data pipelines using Linux shell scripting (Bash, Perl, etc.).
Work with Snowflake and Teradata databases to optimize data models, queries, and performance.
Posted 2 months ago
7 - 12 years
30 - 40 Lacs
Bengaluru
Hybrid
Proficient: Data Modelling, Python, ETL, (Big Data / MPP), Airflow, Kafka, (Presto / Spark), (Power BI / Tableau)
Proficient: (Teradata / AWS Redshift / Google BigQuery / Azure Synapse Analytics)
79-year-old reputed MNC company
Posted 2 months ago
4 - 9 years
4 - 8 Lacs
Bengaluru
Work from Office
We are seeking an experienced (minimum 4 years) Data Engineer with expertise in both Snowflake and Teradata to join our team. The ideal candidate will be responsible for designing, developing, and maintaining data solutions that leverage the strengths of both platforms. This role requires a deep understanding of data warehousing, ETL processes, and SQL.
Key Responsibilities:
Design, develop, and maintain data pipelines and ETL processes using Snowflake and Teradata.
Optimize data storage and retrieval to ensure efficient performance across the platforms.
Collaborate with stakeholders to understand data requirements and deliver solutions that meet business needs.
Develop and maintain SQL queries, stored procedures, and scripts for data manipulation and reporting.
Ensure data quality, integrity, and security across all data solutions (an illustrative reconciliation sketch follows this listing).
Monitor and troubleshoot data workflows to ensure reliability and performance.
Required Skills and Qualifications:
Proven experience with Snowflake and Teradata platforms.
Strong proficiency in SQL and experience with data warehousing concepts.
Experience with ETL tools and processes.
Familiarity with data modelling and database design principles.
Excellent problem-solving and analytical skills.
Strong communication skills to collaborate with technical and non-technical stakeholders.
Experience with cloud platforms such as AWS or Azure Cloud.
Additional Skills: Knowledge of programming languages such as Python or Java.
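As a hedged sketch of the data-quality responsibility above, the helper below compares row counts for the same table on Teradata and Snowflake after a load; the two DB-API connections are assumed to be opened with the vendors' Python drivers, and the table name is a placeholder.

```python
# Minimal reconciliation check between a Teradata source and a Snowflake target (illustrative only).
def rowcount(conn, table: str) -> int:
    cur = conn.cursor()
    cur.execute(f"SELECT COUNT(*) FROM {table}")
    (count,) = cur.fetchone()
    return count

def reconcile(teradata_conn, snowflake_conn, table: str) -> bool:
    src = rowcount(teradata_conn, table)
    tgt = rowcount(snowflake_conn, table)
    print(f"{table}: source={src} target={tgt}")
    return src == tgt
```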
Posted 2 months ago
4 - 10 years
6 - 10 Lacs
Chennai, Pune, Greater Noida
Work from Office
JD:
Good knowledge of the Ab Initio Co>Operating System and the Ab Initio Graphical Development Environment (GDE); Unix/Linux.
Experience in designing graphs and parameterizing graphs using PSETs.
Scheduling jobs through Autosys.
Writing Unix shell scripts.
Good knowledge and understanding of Plans / Conduct>It.
Experience in repository (EME) commands and knowledge of code versioning.
Basic Ab Initio debugging skills.
Knowledge of databases like Teradata and/or Oracle.
Knowledge of writing medium to complex SQL queries.
Ability to debug SQL queries.
Knowledge of performance tuning of the ETL process by writing optimal SQL code.
Knowledge of writing stored procedures and functions in the database.
Should have knowledge of working in Agile projects.
The associate should possess good communication and interpersonal skills.
Experience in Continuous Flows.
Lead day-to-day design and ETL build.
Daily hands-on development of ETL to ingest and load data to databases (Teradata/Hadoop).
Conduct code reviews with team members and set a high technical standard.
Drive agile development and a DevOps approach, including continuous build and continuous testing.
As a senior developer, drive the framework and components.
Partner closely with the business on technology requirements and strategy, create solutions, and execute projects end to end from inception to deployment.
Understand the high-level design and create low-level design documents.
Ab Initio Developer
Posted 2 months ago
7 - 13 years
40 - 45 Lacs
Bengaluru
Work from Office
FBA Inventory Optimization Services (FIOS) is looking for a results-oriented data engineering and analytics leader to help deliver analytics at scale across our product and services portfolio. Our ideal candidate will be an experienced leader who can thrive in an ambiguous and fast-paced business landscape. You are passionate about working with complex datasets and are someone who loves to dive deep, analyze, and turn data into insights. You should have deep expertise in taking an analytic view of business questions, building up and refining metrics frameworks to measure business operations, and translating data into meaningful insights. In this role, you will have ownership of end-to-end analytics development for complex questions and you'll play an integral role in strategic decision making. You should have excellent business and communication skills to be able to work with business owners to understand business challenges and opportunities, and to drive data-driven decisions into process and tool improvements together with the business team. Above all, you should be passionate about how insights can be used to improve both the customer and Seller experiences in doing business with Amazon.
Key job responsibilities:
Hire, manage, coach, and lead a high-performing team of Business Intelligence Engineers and Data Engineers.
Design, develop, troubleshoot, evaluate, deploy, and document data management and business intelligence systems, enabling stakeholders to manage the business and make effective decisions.
Own end-to-end reporting and the underlying ETL processes and data models for supported services.
Understand the business requirements and needs of the product stakeholders and define the data elements and data structures that the team should leverage to enable analytical capabilities.
Work with business customers to understand the business requirements and implement solutions to support analytical and reporting needs.
Design and plan for solutions in the various engineering subject areas as they relate to data storage and movement: data warehousing, enterprise system data architecture, data processing, data management, and data analysis.
Ensure completeness and compatibility of the technical infrastructure to support system performance, availability, and architecture requirements.
Review and participate in testing of the data design, tool design, and data extracts/transforms.
Participate in strategic & tactical planning discussions. Be connected and influential within the Amazon BI and DE community.
Work with Data Engineering and Software Development teams to enable the appropriate capture and storage of key data points.
- 2+ years of processing data with a massively parallel technology (such as Redshift, Teradata, Netezza, Spark, or a Hadoop-based big data solution) experience
- 2+ years of relational database technology (such as Redshift, Oracle, MySQL, or MS SQL) experience
- 2+ years of developing and operating large-scale data structures for business intelligence analytics (using ETL/ELT processes) experience
- 5+ years of data engineering experience
- Experience managing a data or BI team
- Experience communicating to senior management and customers verbally and in writing
- Experience leading and influencing the data or BI strategy of your team or organization
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS
Posted 2 months ago
7 - 12 years
20 - 35 Lacs
Hyderabad
Hybrid
Job Description: We are looking for a highly skilled Lead Data Engineer with 8-12 years of experience to lead a team in building and managing advanced data solutions. The ideal candidate should have extensive experience with SQL, Teradata, Ab Initio, and Google Cloud Platform (GCP).
Key Responsibilities:
Lead the design, development, and optimization of large-scale data pipelines, ensuring they meet business and technical requirements.
Architect and implement data solutions using SQL, Teradata, Ab Initio, and GCP, ensuring scalability, reliability, and performance.
Mentor and guide a team of data engineers in the development and execution of ETL processes and data integration solutions.
Collaborate with cross-functional teams (e.g., data scientists, analysts, product managers) to define data strategies and deliver end-to-end data solutions.
Take ownership of end-to-end data workflows, from data ingestion to transformation, storage, and accessibility.
Lead performance tuning and optimization efforts for complex SQL queries and Teradata database systems.
Design and implement data governance, quality, and security best practices to ensure data integrity and compliance.
Manage the migration of legacy data systems to cloud-based solutions on Google Cloud Platform (GCP) (an illustrative BigQuery load sketch follows this listing).
Ensure continuous improvement and automation of data pipelines and workflows.
Troubleshoot and resolve issues related to data quality, pipeline performance, and system integration.
Stay up-to-date with industry trends and emerging technologies to drive innovation and improve data engineering practices within the team.
Required Skills:
8-12 years of experience in data engineering or related roles.
Strong expertise in SQL, Teradata, and Ab Initio.
In-depth experience with Google Cloud Platform (GCP), including tools like BigQuery, Cloud Storage, Dataflow, etc.
Proven track record of leading teams and projects related to data engineering and ETL pipeline development.
Experience with data warehousing and cloud-native storage solutions.
Strong analytical and problem-solving skills.
Experience in setting up and enforcing data governance, security, and compliance standards.
Preferred Skills:
Familiarity with additional cloud services (AWS, Azure).
Experience with data modeling and metadata management.
Knowledge of big data technologies like Hadoop, Spark, etc.
Strong communication skills and the ability to collaborate effectively with both technical and non-technical teams.
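As an illustrative sketch of the GCP migration work noted above, the snippet below loads Parquet files from Cloud Storage into a BigQuery table using the official client library; the project, dataset, and bucket names are placeholders.

```python
# Load Parquet files from GCS into BigQuery with google-cloud-bigquery (placeholder names).
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.PARQUET,
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,  # replace the table on each run
)

load_job = client.load_table_from_uri(
    "gs://example-bucket/curated/claims/*.parquet",
    "example-project.analytics.claims",
    job_config=job_config,
)
load_job.result()  # block until the load job completes
print(client.get_table("example-project.analytics.claims").num_rows, "rows loaded")
```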
Posted 2 months ago
4 - 7 years
14 - 20 Lacs
Bengaluru, Gurgaon
Hybrid
Role & responsibilities
Data Engineering & ETL Development:
Develop, optimize, and maintain ETL workflows using PySpark, Sqoop, and Hadoop (an illustrative Airflow-scheduled sketch follows this listing).
Implement data ingestion and transformation pipelines for structured and unstructured data.
Integrate and migrate data between Hadoop, Teradata, and SQL-based systems.
Big Data & Distributed Systems:
Work with the Hadoop ecosystem (HDFS, Hive, Sqoop, YARN, MapReduce).
Optimize PySpark-based distributed computing workflows for performance and scalability.
Handle large-scale batch processing and near-real-time data pipelines.
Database & SQL Development:
Write, optimize, and debug complex SQL queries on Teradata, Hive, and other RDBMS systems.
Ensure data consistency, quality, and performance tuning of databases.
Automation & Scripting:
Develop reusable Python scripts for automation, data validation, and process scheduling.
Work with Airflow, Oozie, or similar workflow schedulers.
Performance Optimization & Troubleshooting:
Monitor and tune PySpark jobs and Hadoop cluster performance.
Debug and optimize SQL queries, data pipelines, and ETL processes.
Collaboration & Stakeholder Engagement:
Work with data analysts, data scientists, and business teams to understand requirements.
Provide guidance on data best practices, architecture, and governance.
Preferred candidate profile:
Strong expertise in PySpark and the Hadoop ecosystem (HDFS, YARN, Sqoop, Hive, MapReduce).
Proficiency in SQL, Teradata, and other relational databases.
Hands-on experience with Python scripting for ETL, automation, and data processing.
Experience with big data performance tuning, optimization, and debugging.
Familiarity with job scheduling tools like Airflow, Oozie, or Control-M.
Strong knowledge of data warehousing concepts, data modeling, and ETL frameworks.
Ability to work in Agile/Scrum environments and collaborate with cross-functional teams.
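One possible shape of the scheduled ingestion-and-transform workflow described above, expressed as a minimal Airflow DAG; the task names, schedule, and commands are assumptions, not the employer's actual pipeline.

```python
# Minimal Airflow DAG sketch: Sqoop ingest followed by a spark-submit transform (illustrative only).
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_ingest_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",   # run daily at 02:00
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="sqoop_ingest",
        bash_command=(
            "sqoop import --connect jdbc:teradata://td-host/DATABASE=src "
            "--table transactions --target-dir /data/raw/transactions"
        ),
    )
    transform = BashOperator(
        task_id="pyspark_transform",
        bash_command="spark-submit /opt/jobs/transform_transactions.py",
    )
    ingest >> transform   # transform runs only after ingest succeeds
```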
Posted 2 months ago
6 - 11 years
13 - 14 Lacs
Bengaluru
Work from Office
The Tax Data Solutions team is responsible for delivering business insights and high-impact analyses to the Tax function at eBay. The team addresses strategic and operational questions facing the business, including process automation, business sizing and impact measurement, and data science solutions. The role is data- and analytically intensive. Successful candidates will offer a strategic perspective, sound business judgment, and a collaborative working style. They will possess strong intellectual curiosity and a passion for achieving practical business impact.
Primary Job Responsibilities:
Use SQL/Python/Alteryx to analyse data, create summarized reports, and provide insights and recommendations (an illustrative reporting sketch follows this listing).
Create Tableau dashboards for continuous monitoring of important business metrics.
Establish a monitoring framework for business initiatives and ensure the business unit is alerted in case of issues.
Perform ad-hoc analysis to support data-driven decisions for the tax business.
Perform risk assessments of new tax rules and calculate out-of-pocket expenses.
Calculate the breakage in GMV/Revenue because of tax law changes.
Present analysis to stakeholders independently.
Perform peer QC of code and deliverables to ensure the most accurate reports are sent out.
Build a close partnership with the business unit to identify and explore areas of impactful analytics and operational opportunities to help drive operational efficiencies and cost reduction.
Post-launch monitoring: measure performance of features and programs post launch and report back to product and business teams.
Promote a self-serve analytics culture for the organization by educating partners on key reports, data, and interpretation best practices.
Maintain a close and proactive collaboration with other business functions like Customer Services, Legal, Product, and Finance teams to ensure visibility across functions.
Job skills required:
Intellectual curiosity, passion for problem-solving, and comfort with ambiguity.
Excellent technical skills for data analysis and process development, especially SQL, Python, Alteryx.
A proven track record of end-to-end analytics: problem definition, collating the required information, analyzing the results, synthesizing and communicating a compelling argument, and influencing partners to act.
Proficient in data visualization and self-serve tools like Tableau.
Experienced in handling big data (Hadoop, Teradata, etc.).
Agile experience.
Ability to translate commercial requirements into software solutions.
Capable of working independently while acting as part of a global tax team.
Desired Qualifications:
~6 years of experience in Advanced Analytics.
Proven knowledge of SQL and the Python programming language.
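A small illustrative example of the SQL-plus-Python reporting workflow above; `conn` is assumed to be an open DB-API connection to the warehouse (for example Teradata), and the table and metric definitions are hypothetical.

```python
# Hypothetical tax-reporting summary built with pandas on top of a warehouse query.
import pandas as pd

def tax_rate_report(conn) -> pd.DataFrame:
    df = pd.read_sql(
        """
        SELECT tax_rule, txn_month, SUM(gmv) AS gmv, SUM(tax_collected) AS tax_collected
        FROM tax_reporting.transactions
        GROUP BY tax_rule, txn_month
        """,
        conn,
    )
    # Effective rate per rule and month; unexpected drops can flag potential breakage
    df["effective_rate"] = df["tax_collected"] / df["gmv"]
    return df.pivot_table(index="txn_month", columns="tax_rule", values="effective_rate")
```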
Posted 2 months ago
4 - 7 years
0 - 2 Lacs
Chennai
Hybrid
Position Purpose: Provide a brief description of the overall purpose of the position, why this position exists, and how it will contribute to achieving the team's goal.
The requested position is a developer-analyst in an open environment, which requires knowledge of the mainframe, TSO, JCL, and OPC environment.
Responsibilities
Direct Responsibilities:
For a predefined applications scope, take care of: design and implementation (coding/parametrization, unit test, assembly test, integration test, system test, support during functional/acceptance test), roll-out support, documentation, and continuous improvement.
Ensure that SLA targets are met for the above activities.
Handover to Italian teams if knowledge and skills are not available in ISPL.
Coordinate closely with the Data Platform Teams and all other BNL BNP Paribas IT teams (incident coordination, security, infrastructure, development teams, etc.).
Collaborate with and support Data Platform Teams on Incident Management, Request Management, and Change Management.
Contributing Responsibilities:
Contribute to the knowledge transfer with the BNL Data Platform team.
Help build team spirit and integrate into the BNL BNP Paribas culture.
Contribute to incident analysis and associated problem management.
Contribute to the acquisition by the ISPL team of new skills and knowledge to expand its scope.
Technical & Behavioral Competencies:
Fundamental skills: IBM DataStage, SQL, experience with data modeling and the ERwin tool.
Important skill - knowledge of at least one of the following database technologies is required: Teradata, Oracle, SQL Server.
Basic knowledge of mainframe usage: TSO, ISPF/S, IWS scheduler, JCL.
Nice to have: knowledge of MS SSIS; experience with the ServiceNow ticketing system; knowledge of requirements collection, analysis, design, development, and test activities; continuous improvement approaches; knowledge of Python; knowledge of and experience with RedHat Linux, Windows, AIX, WAS, CFT.
Posted 2 months ago
2 - 7 years
20 - 23 Lacs
Bengaluru
Work from Office
Responsibilities:
Design and implement scalable data migration pipelines to transition data from relational databases and Teradata to AWS Databricks (an illustrative sketch follows this listing).
Develop and optimize ETL processes to facilitate data extraction, transformation, and loading (ETL) workflows.
Implement data transformations using Spark and SQL to ensure data is in the required format for analysis and reporting.
Optimize performance of the data migration pipelines, ensuring efficient use of resources and faster migration times.
Support data automation efforts, ensuring that data migration tasks are automated and run seamlessly.
Conduct data validation and quality checks to guarantee accurate and reliable data migration.
Collaborate with senior engineers and architects to ensure that migration processes are aligned with best practices and security requirements.
Ensure the migration process is secure and efficient, mitigating any risks associated with large-scale data transfers.
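A hedged sketch of one migration step described above: reading a relational table over JDBC inside Databricks, writing it to a Delta table, and running a basic row-count validation; connection details and table names are placeholders.

```python
# Illustrative Databricks migration step: JDBC extract -> Delta table -> row-count check.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

source = (spark.read.format("jdbc")
          .option("url", "jdbc:teradata://td-host/DATABASE=finance")   # placeholder source
          .option("dbtable", "finance.ledger")
          .option("user", "migration_user")
          .option("password", "***")
          .load())

# Write the extract to a managed Delta table in the bronze layer
source.write.format("delta").mode("overwrite").saveAsTable("bronze.ledger")

# Basic validation: row counts must match between the extract and the Delta target
assert source.count() == spark.table("bronze.ledger").count(), "row count mismatch after migration"
```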
Posted 2 months ago
Teradata is a popular data warehousing platform that is widely used by businesses in India. As a result, there is a growing demand for skilled professionals who can work with Teradata effectively. Job seekers in India who have expertise in Teradata have a wide range of opportunities available to them across different industries.
India's major tech hubs are known for their thriving tech industries and have a high demand for Teradata professionals.
The average salary range for Teradata professionals in India varies based on experience levels. Entry-level roles can expect to earn around INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 15 lakhs per annum.
In the field of Teradata, a typical career path may involve progressing from roles such as Junior Developer to Senior Developer, and eventually to a Tech Lead position. With experience and skill development, professionals can take on more challenging and higher-paying roles in the industry.
In addition to Teradata expertise, professionals in this field are often expected to have knowledge of SQL, data modeling, ETL tools, and data warehousing concepts. Strong analytical and problem-solving skills are also essential for success in Teradata roles.
As you prepare for interviews and explore job opportunities in Teradata, remember to showcase your skills and experience confidently. With the right preparation and determination, you can land a rewarding role in the dynamic field of Teradata in India. Good luck!