9.0 - 14.0 years
35 - 40 Lacs
Pune
Work from Office
About Arctera
Arctera keeps the world's IT systems working. We can trust that our credit cards will work at the store, that power will be routed to our homes, and that factories will produce our medications, because those companies themselves trust Arctera. Arctera is behind the scenes making sure that many of the biggest organizations in the world - and many of the smallest too - can face down ransomware attacks, natural disasters, and compliance challenges without missing a beat. We do this through the power of data and our flagship products: Insight, InfoScale, and Backup Exec. Illuminating data also helps our customers maintain personal privacy, reduce the environmental impact of data storage, and defend against illegal or immoral use of information. It's a task that continues to get more complex as data volumes surge. Every day, the world produces more data than it ever has before, and global digital transformation - and the arrival of the age of AI - has set the course for a new explosion in data creation. Joining the Arctera team, you'll be part of a group innovating to harness the opportunity of the latest technologies to protect the world's critical infrastructure and to keep all our data safe.

Lead Salesforce Developer (CPQ & Quote to Cash)

About Arctera
Arctera helps organizations around the world thrive by ensuring they can trust, access, and illuminate their data from creation to retirement. Created in 2024 from Veritas Technologies, an industry leader in secure multi-cloud data resiliency, Arctera comprises three business units: Data Compliance, Data Protection, and Data Resilience. Arctera provides more than 75,000 customers worldwide with market-leading solutions that help them manage one of their most valuable assets: data.

We are looking for a highly skilled Lead Salesforce Developer with 9+ years of experience in Quote-to-Cash (QTC) and Salesforce CPQ. This role is responsible for architecting, designing, and developing scalable CPQ solutions while ensuring seamless integration within the Salesforce ecosystem. The ideal candidate will have deep technical expertise, leadership experience, and a strong understanding of end-to-end QTC processes. This is a strategic role that involves hands-on development, technical leadership, and collaboration with cross-functional teams to drive automation and efficiency in the sales and revenue lifecycle.
Key Responsibilities
- Lead the architecture, design, and development of Salesforce CPQ and QTC solutions
- Define best practices, technical standards, and governance for Salesforce development to ensure scalability and performance
- Design and implement complex pricing models, discount structures, contract management, and automated renewals
- Develop and customize Salesforce CPQ using Apex, Lightning Web Components (LWC), Visualforce, SOQL, and Flows
- Implement advanced business logic, automations, and process improvements to support sales and revenue operations
- Optimize CPQ configurations, guided selling, approvals, and contract lifecycle processes
- Architect and develop integrations within Salesforce using REST and SOAP APIs (see the illustrative sketch below)
- Ensure data consistency and integrity across Salesforce environments
- Provide technical guidance, mentorship, and code reviews for a team of Salesforce developers
- Enforce coding best practices, DevOps methodologies, and CI/CD pipelines for efficient deployment and version control
- Collaborate with business stakeholders, Product Managers, and Solution Architects to translate business needs into scalable technical solutions
- Conduct regular system audits, performance tuning, and security assessments
- Implement robust sharing and security models to protect sensitive data
- Stay updated with Salesforce releases and CPQ enhancements, proactively implementing new features
- Work closely with Sales, Finance, IT, and Operations teams to streamline the Quote-to-Cash process
- Identify automation opportunities, process optimizations, and system improvements to enhance user experience
- Lead technical workshops, training sessions, and documentation efforts to drive adoption

Qualifications & Experience
- 10+ years of hands-on Salesforce development experience, with at least 6+ years specializing in Salesforce CPQ and Quote-to-Cash
- Deep expertise in Salesforce CPQ, including product catalog, complex pricing models, discount structures, approvals, and contract lifecycle management
- Strong development experience with Apex, LWC, Visualforce, SOQL, and Flows, with a focus on modular and reusable code
- Extensive knowledge of Salesforce data architecture, security models, and governor limits
- Hands-on experience with API development and event-driven architectures within Salesforce
- Strong understanding of CI/CD, version control, and deployment automation
- Experience in Salesforce DevOps, SFDX, Scratch Orgs, and the Metadata API
- Excellent problem-solving and debugging skills, with the ability to resolve complex technical issues
- Strong communication, stakeholder management, and team leadership experience
- Salesforce certifications such as Salesforce CPQ Specialist, Platform Developer II, or Application/System Architect are highly preferred
- Knowledge of front-end technologies such as JavaScript, HTML5, CSS, and Aura Components is a plus
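For context on the REST-based integration work this role describes, here is a minimal, hedged Python sketch of an external service querying Salesforce CPQ quote records through the standard SOQL query endpoint. The org URL, token handling, and field names are illustrative assumptions, not part of the posting.

```python
# Minimal sketch: querying Salesforce CPQ quote records over the REST API with Python.
# The instance URL, access token, and field names are placeholders for illustration only.
import requests

INSTANCE_URL = "https://yourorg.my.salesforce.com"   # hypothetical org
ACCESS_TOKEN = "<OAuth access token>"                # obtained via your auth flow

def query_quotes(status: str) -> list[dict]:
    """Return CPQ quote records in a given status via the SOQL query endpoint."""
    soql = (
        "SELECT Id, Name, SBQQ__Status__c "          # field names assumed; adjust to your org
        "FROM SBQQ__Quote__c "
        f"WHERE SBQQ__Status__c = '{status}'"
    )
    resp = requests.get(
        f"{INSTANCE_URL}/services/data/v59.0/query",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        params={"q": soql},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("records", [])

if __name__ == "__main__":
    for record in query_quotes("Draft"):
        print(record["Id"], record["Name"])
```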
Posted 1 month ago
2.0 - 5.0 years
25 - 30 Lacs
Bengaluru
Work from Office
Sanas is revolutionizing the way we communicate with the world's first real-time algorithm designed to modulate accents, eliminate background noise, and magnify speech clarity. Pioneered by seasoned startup founders with a proven track record of creating and steering multiple unicorn companies, our groundbreaking GDP-shifting technology sets a gold standard. Sanas is a 200-strong team, established in 2020. In this short span, we've successfully secured over $100 million in funding. Our innovations have been supported by the industry's leading investors, including Insight Partners, Google Ventures, Quadrille Capital, General Catalyst, Quiet Capital, and other influential investors. Our reputation is further solidified by collaborations with numerous Fortune 100 companies. With Sanas, you're not just adopting a product; you're investing in the future of communication.

We're looking for a sharp, hands-on Data Engineer to help us build and scale the data infrastructure that powers cutting-edge audio and speech AI products. You'll be responsible for designing robust pipelines, managing high-volume audio data, and enabling machine learning teams to access the right data fast. As one of the first dedicated data engineers on the team, you'll play a foundational role in shaping how we handle data end-to-end, from ingestion to training-ready features. You'll work closely with ML engineers, research scientists, and product teams to ensure data is clean, accessible, and structured for experimentation and production.

Key Responsibilities:
- Build scalable, fault-tolerant pipelines for ingesting, processing, and transforming large volumes of audio and metadata.
- Design and maintain ETL workflows for training and evaluating ML models, using tools like Airflow or custom pipelines.
- Collaborate with ML research scientists to make raw and derived audio features (e.g., spectrograms, MFCCs) efficiently available for training and inference (see the brief feature-extraction sketch at the end of this posting).
- Manage and organize datasets, including labeling workflows, versioning, annotation pipelines, and compliance with privacy policies.
- Implement data quality, observability, and validation checks across critical data pipelines.
- Help optimize data storage and compute strategies for large-scale training.

Qualifications:
- 2-5 years of experience as a Data Engineer, Software Engineer, or similar role with a focus on data infrastructure.
- Proficient in Python, SQL, and distributed data processing tools (e.g., Spark, Dask, Beam).
- Experience with cloud data infrastructure (AWS/GCP), object storage (e.g., S3), and data orchestration tools.
- Familiarity with audio data and its unique challenges (large file sizes, time-series features, metadata handling) is a strong plus.
- Comfortable working in a fast-paced, iterative startup environment where systems are constantly evolving.
- Strong communication skills and a collaborative mindset; you'll be working cross-functionally with ML, infra, and product teams.

Nice to Have:
- Experience with data for speech models like ASR, TTS, or speaker verification.
- Knowledge of real-time data processing (e.g., Kafka, WebSockets, or low-latency APIs).
- Background in MLOps, feature engineering, or supporting model lifecycle workflows.
- Experience with labeling tools, audio annotation platforms, or human-in-the-loop systems.

Joining us means contributing to the world's first real-time speech understanding platform, revolutionizing Contact Centers and Enterprises alike.
Our technology empowers agents, transforms customer experiences, and drives measurable growth. But this is just the beginning. You'll be part of a team exploring the vast potential of an increasingly sonic future.
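As an illustration of the audio feature work mentioned above (spectrograms, MFCCs), here is a minimal, hedged Python sketch using librosa; the file path, sample rate, and feature sizes are assumptions for illustration, not Sanas's actual pipeline.

```python
# Minimal sketch: deriving training-ready audio features (log-mel spectrogram and MFCCs)
# from a raw audio file. Paths, sample rate, and feature sizes are illustrative assumptions.
import librosa
import numpy as np

def extract_features(path: str, sr: int = 16000) -> dict[str, np.ndarray]:
    """Load one audio file and compute a log-mel spectrogram plus MFCCs."""
    y, sr = librosa.load(path, sr=sr, mono=True)          # resample to a fixed rate
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=80)
    log_mel = librosa.power_to_db(mel)                     # log scale for model input
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return {"log_mel": log_mel, "mfcc": mfcc}

if __name__ == "__main__":
    feats = extract_features("example_utterance.wav")      # hypothetical file
    print({k: v.shape for k, v in feats.items()})
```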
Posted 1 month ago
9.0 - 12.0 years
12 - 17 Lacs
Bengaluru
Work from Office
We are seeking a highly skilled and experienced Ab Initio Lead with 10 years of hands-on experience in ETL development, preferably in Ab Initio. The ideal candidate should have strong technical expertise, leadership qualities, and the ability to guide teams and collaborate with cross-functional stakeholders. This role involves end-to-end ownership of data integration and ETL processes using Ab Initio tools.

Responsibilities:
- Lead the design, development, and implementation of complex ETL solutions using Ab Initio (GDE, EME, Co>Operating System, and related components).
- Work closely with business analysts, data architects, and stakeholders to gather requirements and translate them into scalable data pipelines.
- Optimize performance of ETL jobs and troubleshoot data issues in large-scale production environments.
- Lead a team of Ab Initio developers and ensure adherence to development standards and best practices.
- Manage code reviews, performance tuning, and deployment activities.
- Ensure high levels of data quality and integrity through effective testing and validation procedures.
- Work with DevOps and infrastructure teams to support deployments and release management.

Required Skills:
- 10+ years of hands-on experience with Ab Initio development (GDE, EME, Conduct>It, Co>Operating System, Continuous Flows, etc.).
- Strong experience in data warehousing concepts and ETL design patterns.
- Expertise in performance tuning and handling large volumes of data.
- Knowledge of data modeling, data governance, and metadata management.
- Strong problem-solving and debugging skills.
- Excellent communication and stakeholder management skills.
- Ability to lead and mentor a team of developers effectively.

Keywords: Lead, Ab Initio
Posted 1 month ago
4.0 - 7.0 years
7 - 11 Lacs
Mumbai
Work from Office
Working as an SME in data governance, metadata management, and data catalog solutions, specifically on Collibra Data Governance. Client interface and consulting skills are required.
- Experience in data governance across a wide variety of data types (structured, semi-structured, and unstructured data) and data sources (HDFS, S3, Kafka, Cassandra, Hive, HBase, Elasticsearch)
- Partner with Data Stewards for requirements, integrations, and processes; participate in meetings and working sessions
- Partner with Data Management and integration leads to improve Data Management technologies and processes
- Working experience of the Collibra operating model, workflow BPMN development, and how to integrate various applications or systems with Collibra
- Experience in setting up people's roles, responsibilities and controls, data ownership, workflows, and common processes
- Integrate Collibra with other enterprise tools: Data Quality tools, Data Catalog tools, Master Data Management solutions
- Develop and configure all Collibra customized workflows
- Develop APIs (REST, SOAP) to expose metadata functionalities to end users
Location: Pan India
Posted 1 month ago
6.0 - 12.0 years
11 - 15 Lacs
Pune
Work from Office
As a Data Architect, you'll design and optimize data architecture to ensure data is accurate, secure, and accessible. You'll collaborate across teams to shape the data strategy, implement governance, and promote best practices, enabling the business to gain insights, innovate, and make data-driven decisions at scale.

Your responsibilities:
- Responsible for defining the enterprise data architecture which streamlines, standardises, and enhances accessibility of organisational data.
- Elicits data requirements from senior Business stakeholders and the broader IS function, translating their needs into conceptual, logical, and physical data models.
- Oversees the effective integration of data from various sources, ensuring data quality and consistency.
- Monitors and optimises data performance, collaborating with Data Integration and Product teams to deliver changes that improve data performance.
- Supports the Business, the Data Integration Platforms team, and wider IS management to define a data governance framework that sets out how data will be governed, accessed, and secured across the organisation; supports the operation of the data governance model as a subject matter advisor.
- Provides advisory to Data Platform teams in defining the Data Platform architecture, advising on metadata, data integration, business intelligence, and data storage needs.
- Supports the Data Integration Platforms team and other senior IS stakeholders to define a data vision and strategy, setting out how the organisation will exploit its data for maximum Business value.
- Builds and maintains a repository of data architecture artefacts (e.g., data dictionary).

What We're Looking For:
- Proven track record in defining enterprise data architectures, data models, and database/data warehouse solutions.
- Evidenced ability to advise on the use of key data platform architectural components (e.g., Azure Lakehouse, Databricks, etc.) to deliver and optimise the enterprise data architecture.
- Experience in data integration technologies, real-time data ingestion, and API-based integrations.
- Experience in SQL and other database management systems.
- Strong problem-solving skills for interpreting complex data requirements and translating them into feasible data architecture solutions and models.
- Experience in supporting the definition of an enterprise data vision and strategy, advising on implications and/or uplifts required to the enterprise data architecture.
- Experience designing and establishing data governance models and data management practices, ensuring data is correct and secure whilst still being accessible, in line with regulations and wider organisational policies.
- Able to present complex data-related initiatives and issues to senior, non-data-conversant audiences.
- Proven experience working with AI and Machine Learning models preferred, but not essential.

What We Can Offer You:
- We support your growth within the role, department, and across the company through internal opportunities.
- We offer a hybrid working model, allowing you to combine remote work with the opportunity to connect with your team in modern, welcoming office spaces.
- We encourage continuous learning with access to online platforms (e.g., LinkedIn Learning), language courses, soft skills training, and various wellbeing initiatives, including workshops and webinars.
- Join a diverse and inclusive work environment where your ideas are valued and your contributions make a difference.
Posted 1 month ago
5.0 - 10.0 years
7 - 12 Lacs
Bengaluru
Work from Office
- Minimum of 5 years' experience as a software tester, with proven experience in defining and leading QA cycles
- Strong experience with DBT (Data Build Tool) and writing/validating SQL models
- Hands-on experience with Collibra for metadata management and data governance validation
- Solid understanding of data warehousing concepts and ETL/ELT processes
- Proficiency in SQL for data validation and transformation testing
- Familiarity with version control tools like Git
- Understanding of data governance, metadata, and data quality principles
- Strong analytical and problem-solving skills with attention to detail
Posted 1 month ago
3.0 - 8.0 years
10 - 14 Lacs
Chennai
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: OneStream Extensive Finance SmartCPM
Good-to-have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: Finance background (MBA/PG/CA/CFA in Finance) recommended; Bachelor of Engineering; MS Azure certification preferred.

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that application development aligns with business objectives, overseeing project timelines, and providing guidance to team members to foster a productive work environment. You will also engage in problem-solving discussions, ensuring that solutions are effectively implemented and meet the needs of stakeholders.

Key Responsibilities:
- Lead the design, development, and enhancement of OneStream solutions to support financial consolidation, planning, and reporting.
- Collaborate with Finance, Accounting, and IT teams to gather business requirements and translate them into technical solutions within OneStream.
- Manage and maintain metadata, business rules, data integrations, and reporting structures in OneStream.
- Develop and maintain calculation scripts, business rules, and custom solutions using VB.NET or related scripting languages.
- Drive the monthly and quarterly close processes by ensuring timely and accurate data loads, validations, and reporting outputs.
- Develop and maintain dashboards, reports, and cube views for end users across the organization.
- Provide end-user support and training, acting as a subject matter expert (SME) for OneStream across the company.
- Conduct system testing and troubleshooting, working with stakeholders and vendors as needed.
- Work on break-fixes and enhancement requests.
- Deliver assigned work successfully and on time with high quality.
- Develop documentation for the delivered solution.
- The candidate must have good troubleshooting skills and be able to think through issues and problems in a logical manner.

Technical Experience:
- 3+ years of development experience in OneStream, focused on but not limited to Financial Forecasting, Supply Chain Planning, and HR/Sales/Incentive Compensation Management or similar use cases.
- 6+ years of strong background and experience in consulting roles focused on Financial Planning / Supply Chain / Sales Performance Planning.
- Familiarity with SCRUM/Agile.
- Hands-on in MS Excel, using advanced formulae to develop mock-ups for clients.
- Ability to effectively communicate with the client team and in client-facing roles.
- Ability to work effectively remotely and, if required, willingness to travel out of the base location.

Professional Attributes:
- Good communication skills, as the candidate will be speaking on calls and writing emails directly to the client.
- Leadership qualities and a positive attitude to take on challenging tasks.
- Ability to manage a team of 10 people in development and support projects.
- Good analytical and presentation skills.
- Strong sense of responsibility and positive attitude.

Educational Qualification: Finance background (MBA/PG/CA/CFA in Finance) recommended; Bachelor of Engineering; MS Azure certification preferred.
Posted 1 month ago
14.0 - 19.0 years
20 - 25 Lacs
Kochi
Work from Office
Job Summary: We are seeking a highly experienced and visionary Databricks Data Architect with over 14 years in data engineering and architecture, including deep hands-on experience in designing and scaling Lakehouse architectures using Databricks. The ideal candidate will possess deep expertise across data modeling, data governance, real-time and batch processing, and cloud-native analytics using the Databricks platform. You will lead the strategy, design, and implementation of modern data architecture to drive enterprise-wide data initiatives and maximize the value from the Databricks platform.

Key Responsibilities:
- Lead the architecture, design, and implementation of scalable and secure Lakehouse solutions using Databricks and Delta Lake.
- Define and implement data modeling best practices, including the medallion architecture (bronze/silver/gold layers).
- Champion data quality and governance frameworks leveraging Databricks Unity Catalog for metadata, lineage, access control, and auditing.
- Architect real-time and batch data ingestion pipelines using Apache Spark Structured Streaming, Auto Loader, and Delta Live Tables (DLT).
- Develop reusable templates, workflows, and libraries for data ingestion, transformation, and consumption across various domains.
- Collaborate with enterprise data governance and security teams to ensure compliance with regulatory and organizational data standards.
- Promote self-service analytics and data democratization by enabling business users through Databricks SQL and Power BI/Tableau integrations.
- Partner with Data Scientists and ML Engineers to enable ML workflows using MLflow, Feature Store, and Databricks Model Serving.
- Provide architectural leadership for enterprise data platforms, including performance optimization, cost governance, and CI/CD automation in Databricks.
- Define and drive the adoption of DevOps/MLOps best practices on Databricks using Databricks Repos, Git, Jobs, and Terraform.
- Mentor and lead engineering teams on modern data platform practices, Spark performance tuning, and efficient Delta Lake optimizations (Z-ordering, OPTIMIZE, VACUUM, etc.); a brief illustrative sketch follows the certifications list below.

Technical Skills:
- 10+ years in Data Warehousing, Data Architecture, and Enterprise ETL design.
- 5+ years of hands-on experience with Databricks on Azure/AWS/GCP, including advanced Apache Spark and Delta Lake.
- Strong command of SQL, PySpark, and Spark SQL for large-scale data transformation.
- Proficiency with Databricks Unity Catalog, Delta Live Tables, Auto Loader, DBFS, Jobs, and Workflows.
- Hands-on experience with Databricks SQL and integration with BI tools (Power BI, Tableau, etc.).
- Experience implementing CI/CD on Databricks, using tools like Git, Azure DevOps, Terraform, and Databricks Repos.
- Proficient with streaming architectures using Spark Structured Streaming, Kafka, or Event Hubs/Kinesis.
- Understanding of ML lifecycle management with MLflow, and experience deploying MLOps solutions on Databricks.
- Familiarity with cloud object stores (e.g., AWS S3, Azure Data Lake Gen2) and data lake architectures.
- Exposure to data cataloging and metadata management using Unity Catalog or third-party tools.
- Knowledge of orchestration tools like Airflow, Databricks Workflows, or Azure Data Factory.
- Experience with Docker/Kubernetes for containerization (optional, for cross-platform knowledge).
Preferred Certifications (a plus):
- Databricks Certified Data Engineer Associate/Professional
- Databricks Certified Lakehouse Architect
- Microsoft Certified: Azure Data Engineer / Azure Solutions Architect
- AWS Certified Data Analytics - Specialty
- Google Professional Data Engineer
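As a hedged illustration of the medallion-style ingestion and Delta maintenance work referenced above, the PySpark sketch below shows an Auto Loader stream landing raw files in a bronze Delta table, followed by routine OPTIMIZE/Z-ORDER and VACUUM commands. Paths, table names, and columns are assumptions for illustration, not a prescribed implementation.

```python
# Minimal sketch (assumed to run in a Databricks notebook/job where `spark` already exists):
# land raw JSON files into a bronze Delta table with Auto Loader, then run routine
# Delta Lake maintenance. All paths, table names, and columns are illustrative.

RAW_PATH = "s3://example-bucket/raw/events/"             # hypothetical landing zone
CHECKPOINT = "s3://example-bucket/_checkpoints/bronze_events/"

bronze_stream = (
    spark.readStream.format("cloudFiles")                 # Auto Loader
        .option("cloudFiles.format", "json")
        .option("cloudFiles.schemaLocation", CHECKPOINT)
        .load(RAW_PATH)
)

query = (
    bronze_stream.writeStream
        .format("delta")
        .option("checkpointLocation", CHECKPOINT)
        .trigger(availableNow=True)                       # incremental batch-style run
        .toTable("bronze.events")                         # hypothetical catalog table
)
query.awaitTermination()                                  # wait for the availableNow batch to finish

# Routine Delta maintenance on the bronze table: co-locate data for common filters
# (event_date is an assumed column), then clean up files past the retention window.
spark.sql("OPTIMIZE bronze.events ZORDER BY (event_date)")
spark.sql("VACUUM bronze.events RETAIN 168 HOURS")
```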
Posted 1 month ago
12.0 - 15.0 years
35 - 40 Lacs
Kolkata, Mumbai, New Delhi
Work from Office
Databricks Architect | 12-15 years | Pune/Bangalore/Hyderabad

What's this role about?
As a Databricks Architect, you will be responsible for providing advisory and thought leadership on the provision of analytics environments leveraging cloud-based platforms and big data technologies, including integration with existing data and analytics platforms and tools. You will contribute to pre-sales and design, implement scalable data architectures and information solutions covering data security, data privacy, data governance, metadata management, multi-tenancy and mixed workload management, and provide delivery oversight.

Here's how you'll contribute:
The Databricks Architect at Zensar participates in the end-to-end cycle from opportunity identification to closure, takes complete ownership of project execution, and provides valuable expertise on the project. You will do this by:
- Responding to client RFI/RFP documents with proper solution design, including cost estimates
- Understanding customer requirements and creating the technical proposition
- Contributing to SoWs, technical project roadmaps, etc. required for successful execution of projects, leveraging a Technical Scoping & Solutioning approach
- Managing and owning all aspects of technical development and delivery
- Understanding requirements and writing technical documents
- Ensuring code reviews and developing best practices
- Planning the end-to-end technical scope of the project and customer engagement, including sprint and delivery planning
- Estimating effort, identifying risks, and providing technical support whenever needed
- Demonstrating the ability to multitask and re-prioritize responsibilities based on dynamic requirements
- Leading and mentoring teams as needed

Skills required to contribute:
- 12-15 years of Data and Analytics experience, with a minimum of 6+ years in Big Data technologies and a minimum of 3 years in Azure/AWS/GCP cloud-native data services, including experience in Databricks
- Excellent communication and presentation skills
- Hands-on experience with the Big Data stack (HDFS, Spark, MapReduce, Hadoop, Sqoop, Pig, Hive, HBase, Flume, Kafka) as well as with NoSQL stores (e.g., MongoDB, HBase, Cassandra)
- Experience in job scheduling using Oozie, Airflow, or any other ETL scheduler
- Experience working on cloud platforms (AWS/Azure/Google Cloud) to analyze, re-architect, and re-platform on-premise data warehouses to the cloud using native or third-party services
- Design and build production data pipelines from ingestion to consumption within a big data architecture, using Java, Python, or Scala, with a good understanding of Delta Lake
- Good experience in designing and delivering data analytics solutions using Azure, AWS, or GCP cloud-native services
- Good experience in requirements analysis, solution architecture design, data modelling, ETL, data integration, and data migration design
- Well versed with Waterfall, Agile, Scrum, and similar project delivery methodologies
- Experienced in internal as well as external stakeholder management
- Experience in MDM/DQM/Data Governance technologies like Collibra, Ataccama, Alation, or Reltio will be an added advantage
- Databricks/Spark or cloud data certification will be an added advantage

Nice to have skills:
- Working experience with Snowflake, reporting tools like Power BI/Tableau, Unix, etc.

How we'd like you to lead:
Take part in and host regular knowledge-sharing sessions, mentor more junior members of the team, and support the continuous development of our practice. We also want you to:
- Share project stories with the wider business.
- Provide recommendations and best practices for application development, platform development, and developer tools.
- Actively stay abreast of industry best practices, share learnings, and experiment with and apply cutting-edge technologies.
- Provide technical leadership and be a role model/coach to software engineers pursuing a technical career path in engineering.
- Provide/inspire innovations that fuel the growth of Intuit as a whole and generate creative ideas for emerging business needs.
- Create content to promote your personal brand and Zensar to the world.
- Live Zensar's values, both internally and externally.
Posted 1 month ago
3.0 - 6.0 years
10 - 14 Lacs
Gurugram
Work from Office
As a Software Developer you'll participate in many aspects of the software development lifecycle, such as design, code implementation, testing, and support. You will create software that enables your clients' hybrid-cloud and AI journeys.

Your primary responsibilities include:
- Comprehensive Feature Development and Issue Resolution: working on end-to-end feature development and solving challenges faced in the implementation.
- Stakeholder Collaboration and Issue Resolution: collaborating with key stakeholders, internal and external, to understand problems and issues with the product and features, and solving issues as per defined SLAs.
- Continuous Learning and Technology Integration: being eager to learn new technologies and implementing them in feature development.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- SQL authoring, query, and cost optimisation, primarily on BigQuery.
- Python as an object-oriented scripting language.
- Data pipeline, data streaming, and workflow management tools: Dataflow, Pub/Sub, Hadoop, Spark Streaming.
- Version control system: Git. Preferably, knowledge of Infrastructure as Code: Terraform.
- Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL), as well as working familiarity with a variety of databases.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions.

Preferred technical and professional experience:
- Experience building and optimising data pipelines, architectures, and data sets.
- Building processes supporting data transformation, data structures, metadata, dependency, and workload management.
- Working knowledge of message queuing, stream processing, and highly scalable data stores.
- Experience supporting and working with cross-functional teams in a dynamic environment.
- We are looking for a candidate with experience in a Data Engineer role who is also familiar with Google Cloud Platform.
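Since the posting highlights query and cost optimisation on BigQuery, here is a minimal, hedged Python sketch that estimates a query's scanned bytes with a dry run before executing it; the project, table, and query text are illustrative assumptions.

```python
# Minimal sketch: estimate BigQuery scan cost with a dry run before running a query.
# Project, table, and query text are illustrative assumptions only.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")       # hypothetical project

SQL = """
    SELECT user_id, COUNT(*) AS events
    FROM `example-project.analytics.events`                -- hypothetical table
    WHERE event_date >= '2024-01-01'                        -- partition filter limits scan
    GROUP BY user_id
"""

# Dry run: validates the query and reports the bytes it would process, without running it.
dry_cfg = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
dry_job = client.query(SQL, job_config=dry_cfg)
print(f"Query would scan {dry_job.total_bytes_processed / 1e9:.2f} GB")

# Only execute if the estimated scan stays under an assumed budget (e.g., 10 GB).
if dry_job.total_bytes_processed < 10 * 1e9:
    for row in client.query(SQL).result():
        print(row.user_id, row.events)
```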
Posted 1 month ago
1.0 - 3.0 years
3 - 6 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Locations: Pune/Bangalore/Hyderabad/Indore
Contract duration: 6 months

Responsibilities:
- Be responsible for the development of conceptual, logical, and physical data models, and the implementation of RDBMS, operational data stores (ODS), data marts, and data lakes on target platforms.
- Implement business and IT data requirements through new data strategies and designs across all data platforms (relational and dimensional required; NoSQL optional) and data tools (reporting, visualization, analytics, and machine learning).
- Work with business and application/solution teams to implement data strategies, build data flows, and develop conceptual/logical/physical data models.
- Define and govern data modeling and design standards, tools, best practices, and related development for enterprise data models.
- Identify the architecture, infrastructure, and interfaces to data sources, tools supporting automated data loads, security concerns, analytic models, and data visualization.
- Hands-on modeling, design, configuration, installation, performance tuning, and sandbox POCs.
- Work proactively and independently to address project requirements and articulate issues/challenges to reduce project delivery risks.
- Must have a Payments background.

Skills:
- Hands-on relational, dimensional, and/or analytic experience (using RDBMS, dimensional, and NoSQL data platform technologies, and ETL and data ingestion protocols).
- Experience with data warehouses, data lakes, and enterprise big data platforms in multi-data-center contexts required.
- Good knowledge of metadata management, data modeling, and related tools (Erwin, ER Studio, or others) required.
- Experience in team management, communication, and presentation.
- Experience with Erwin, Visio, or any other relevant tool.
Posted 1 month ago
3.0 - 6.0 years
5 - 8 Lacs
Hyderabad
Work from Office
SAP professionals design, implement, and deploy SAP solutions to achieve defined business goals. They maintain skills in SAP application process design and configuration, as well as SAP application design, development, integration, testing, and deployment.

Responsibilities:
- Responsible for setting up the SAP DMS structure for a large-scale enterprise.
- Integration and installation of the SAP DMS application with other core SAP modules.
- Keeping the scope in mind, configure the system accordingly.
- Able to test and document the system; train key users.
- Experienced with SAP documents, document structures, document search, document distribution, version management, document vaulting, the integrated viewer, status management, document conversion, document workflow, and interfacing table specifications.
- Should also be versed in the integration aspects of SAP DMS with other core modules of SAP ECC.
- Experience with OpenText or Documentum will be an advantage.
- Setting up SAP Knowledge Management and SAP Content Server; experienced with metadata search and configuration, integration with workflow configuration, TREX integration, and the security associated with a DMS implementation.
- Should have worked with the associated standard workflows across MM, QM, and PM.
- Experienced in the setup of ArchiveLink to support retention.
Posted 1 month ago
1.0 - 8.0 years
2 - 6 Lacs
Hyderabad
Work from Office
Career Category: Information Systems

Job Description

ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world's toughest diseases and make people's lives easier, fuller, and longer. We discover, develop, manufacture, and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today.

ABOUT THE ROLE
Role Description: We are seeking an MDM Associate Data Engineer with 2-5 years of experience to support and enhance our enterprise MDM (Master Data Management) platforms using Informatica/Reltio. This role is critical in delivering high-quality master data solutions across the organization, utilizing modern tools like Databricks and AWS to drive insights and ensure data reliability. The ideal candidate will have strong SQL and data profiling skills and experience working with cross-functional teams in a pharma environment. To succeed in this role, the candidate must have strong data engineering experience along with MDM knowledge; candidates with only MDM experience are not eligible. The candidate must have data engineering experience with technologies such as SQL, Python, PySpark, Databricks, and AWS, along with knowledge of MDM (Master Data Management).

Roles & Responsibilities:
- Analyze and manage customer master data using Reltio or Informatica MDM solutions.
- Perform advanced SQL queries and data analysis to validate and ensure master data integrity.
- Leverage Python, PySpark, and Databricks for scalable data processing and automation.
- Collaborate with business and data engineering teams for continuous improvement in MDM solutions.
- Implement data stewardship processes and workflows, including approval and DCR mechanisms.
- Utilize AWS cloud services for data storage and compute processes related to MDM.
- Contribute to metadata and data modeling activities.
- Track and manage data issues using tools such as JIRA, and document processes in Confluence.
- Apply Life Sciences/Pharma industry context to ensure data standards and compliance.

Basic Qualifications and Experience:
- Master's degree with 1-3 years of experience in Business, Engineering, IT, or a related field; OR
- Bachelor's degree with 2-5 years of experience in Business, Engineering, IT, or a related field; OR
- Diploma with 6-8 years of experience in Business, Engineering, IT, or a related field.

Functional Skills (Must-Have):
- Advanced SQL expertise and data wrangling.
- Strong experience in Python and PySpark for data transformation workflows.
- Strong experience with Databricks and AWS architecture.
- Knowledge of MDM, data governance, stewardship, and profiling practices.
- In addition to the above, candidates with experience on Informatica or Reltio MDM platforms will be preferred.

Good-to-Have Skills:
- Experience with IDQ, data modeling, and approval workflows/DCR.
- Background in Life Sciences/Pharma industries.
- Familiarity with project tools like JIRA and Confluence.
- Strong grip on data engineering concepts.

Professional Certifications:
- Any ETL certification (e.g., Informatica)
- Any data analysis certification (SQL, Python, Databricks)
- Any cloud certification (AWS or Azure)

Soft Skills:
- Strong analytical abilities to assess and improve master data processes and solutions.
- Excellent verbal and written communication skills, with the ability to convey complex data concepts clearly to technical and non-technical stakeholders.
- Effective problem-solving skills to address data-related issues and implement scalable solutions.
- Ability to work effectively with global, virtual teams.

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Posted 1 month ago
5.0 - 10.0 years
4 - 7 Lacs
Hyderabad
Work from Office
A Data Governance Specialist will be responsible for several technical tasks, including developing and maintaining Collibra workflows and creating and maintaining integrations between data systems, data catalogs, and data quality tools. Additional tasks related to managing metadata, master data, and the business glossary will also be required.

Key Responsibilities:
- Develop and maintain Collibra workflows to support data governance initiatives.
- Create and maintain integrations between data systems and data governance tools.
- Write and maintain data quality rules to measure data quality.
- Work with vendors to troubleshoot and resolve technical issues related to workflows and integrations.
- Work with other teams to ensure adherence to DG policies and standards.
- Assist in implementing data governance initiatives around data quality, master data, and metadata management.

Qualifications:
- Strong programming skills.
- Knowledge of system integration and use of middleware solutions.
- Proficiency in SQL and relational databases.
- Understanding of data governance, including data quality, master data, and metadata management.
- Willingness to learn new tools and skills.

Preferred Qualifications:
- Proficient with Java or Groovy.
- Proficient with MuleSoft or other middleware.
- Proficient with Collibra DIP, Collibra Data Quality, and DQLabs.
- Experience with AWS Redshift and Databricks.
Posted 1 month ago
4.0 - 11.0 years
12 - 13 Lacs
Bengaluru
Work from Office
Job Description: QA DBT Engineer
Job Location: Hyderabad / Bangalore / Chennai / Kolkata / Noida / Gurgaon / Pune / Indore / Mumbai

- Minimum of 8 years' experience as a software tester, with proven experience in defining and leading QA cycles
- Strong experience with DBT (Data Build Tool) and writing/validating SQL models
- Hands-on experience with Collibra for metadata management and data governance validation
- Solid understanding of data warehousing concepts and ETL/ELT processes
- Proficiency in SQL for data validation and transformation testing
- Familiarity with version control tools like Git
- Understanding of data governance, metadata, and data quality principles
- Strong analytical and problem-solving skills with attention to detail
Posted 1 month ago
5.0 - 15.0 years
11 - 13 Lacs
Bengaluru
Work from Office
Job Description: Collibra Data Governance Specialist
Job Location: Hyderabad / Bangalore / Chennai / Kolkata / Noida / Gurgaon / Pune / Indore / Mumbai

Required Skills:
- 5-15 years of experience in data governance and/or metadata management.
- Hands-on experience with Collibra Data Governance Center (Collibra DGC), including workflow configuration, cataloging, and operating model customization.
- Strong knowledge of metadata management, data lineage, and data quality principles.
- Hands-on experience with Snowflake.
- Familiarity with data integration tools and the AWS cloud platform.
- Experience with SQL and working knowledge of relational databases.
- Understanding of data privacy regulations (e.g., GDPR, CCPA) and compliance frameworks.

Preferred Skills:
- Certifications such as Collibra Certified Solution Architect.
- Experience integrating Collibra with tools like Snowflake, Tableau, or other BI/analytics platforms.
- Exposure to DataOps, MDM (Master Data Management), and data governance frameworks like DAMA-DMBOK.
- Strong communication and stakeholder management skills.
Posted 1 month ago
2.0 - 6.0 years
4 - 7 Lacs
Kolkata
Work from Office
Data Modeling & Design:
- Design and maintain conceptual, logical, and physical data models to support transactional, analytical, and data warehousing systems.
- Develop data models to ensure data consistency, integrity, and quality across multiple banking functions and applications.
- Define and implement best practices for data modeling, data integration, and metadata management.

Data Architecture Collaboration:
- Collaborate with data architects to align models with the enterprise data architecture and ensure optimal performance and scalability.
- Work with database administrators to translate logical models into physical database structures and optimize them for performance.

Data Quality & Governance:
- Establish data standards, definitions, and quality rules to ensure data accuracy, consistency, and compliance.
- Create and maintain data dictionaries and metadata repositories to support data governance and facilitate efficient data access.

Stakeholder Engagement:
- Engage with business stakeholders, data scientists, and IT teams to understand data requirements and translate business needs into robust data models.
- Ensure data models support key business initiatives, such as regulatory reporting, analytics, and operational efficiency.

Documentation & Best Practices:
- Develop and maintain detailed documentation, including data models, entity-relationship diagrams, and mapping specifications.
- Implement data modeling standards and mentor team members to promote best practices in data management.

Requirements:
- Educational Qualification: MBA / Engineering degree with relevant industry experience.
- Experience: Minimum of 10 years of experience in data modeling or as a data modeler within the banking industry.
- Proven expertise in designing data models across at least three of the following areas: Data Warehousing, Analytics & BI, Data Mining, Data Quality, Metadata Management.

Technical Skills:
- Proficiency in data modeling tools such as Erwin, IBM InfoSphere, or SAP PowerDesigner.
- Strong SQL skills and experience with relational database systems like Oracle, SQL Server, or DB2.
- Familiarity with big data technologies and NoSQL databases is a plus.
- Knowledge of ETL processes and tools (e.g., Informatica, Talend) and experience working with BI tools (e.g., Tableau, Power BI).
- Knowledge of SAS for analytics, data manipulation, and data management is a plus.
- Strong understanding of data governance frameworks, data quality management, and regulatory compliance.

Soft Skills:
- Strong analytical and problem-solving skills, with attention to detail and accuracy.
- Excellent communication and interpersonal skills, with the ability to translate technical data concepts for business stakeholders.
- Proven ability to work collaboratively in a cross-functional environment and manage multiple projects.
Posted 1 month ago
10.0 - 15.0 years
5 - 9 Lacs
Mumbai
Work from Office
Role Overview:
We are hiring a Talend Data Quality Developer to design and implement robust data quality (DQ) frameworks in a Cloudera-based data lakehouse environment. The role focuses on building rule-driven validation and monitoring processes for migrated data pipelines, ensuring high levels of data trust and regulatory compliance across critical banking domains.

Key Responsibilities:
- Design and implement data quality rules using Talend DQ Studio, tailored to validate customer, account, transaction, and KYC datasets within the Cloudera Lakehouse.
- Create reusable templates for profiling, validation, standardization, and exception handling.
- Integrate DQ checks within PySpark-based ingestion and transformation pipelines targeting Apache Iceberg tables (a brief illustrative sketch follows below).
- Ensure compatibility with Cloudera components (HDFS, Hive, Iceberg, Ranger, Atlas) and job orchestration frameworks (Airflow/Oozie).
- Perform initial and ongoing data profiling on source and target systems to detect data anomalies and drive rule definitions.
- Monitor and report DQ metrics through dashboards and exception reports.
- Work closely with data governance, architecture, and business teams to align DQ rules with enterprise definitions and regulatory requirements.
- Support lineage and metadata integration with tools like Apache Atlas or external catalogs.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Experience: 5-10 years in data management, with 3+ years in Talend Data Quality tools.
- Platforms: experience with Cloudera Data Platform (CDP), with an understanding of the Iceberg, Hive, HDFS, and Spark ecosystems.
- Languages/Tools: Talend Studio (DQ module), SQL, Python (preferred), Bash scripting.
- Data Concepts: strong grasp of data quality dimensions (completeness, consistency, accuracy, timeliness, uniqueness).
- Banking Exposure: experience with financial services data (CIF, AML, KYC, product masters) is highly preferred.
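As a hedged illustration of embedding DQ checks in a PySpark pipeline, the sketch below runs simple completeness and uniqueness checks on a staged DataFrame before it is appended to an Iceberg table; the catalog, table names, columns, and thresholds are assumptions, not the client's actual rules.

```python
# Minimal sketch: rule-driven completeness/uniqueness checks inside a PySpark ingestion
# step, run before writing to an Iceberg table. Catalog, table, columns, and thresholds
# are illustrative assumptions only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks-sketch").getOrCreate()

staged = spark.table("staging.customer_raw")               # hypothetical staged data

def run_dq_checks(df, key_col: str, required_cols: list[str]) -> list[str]:
    """Return a list of human-readable DQ failures; an empty list means all checks passed."""
    failures = []
    total = df.count()

    # Completeness: required columns must not contain nulls.
    for col in required_cols:
        nulls = df.filter(F.col(col).isNull()).count()
        if nulls > 0:
            failures.append(f"{col}: {nulls}/{total} null values")

    # Uniqueness: the business key must not be duplicated.
    dupes = total - df.select(key_col).distinct().count()
    if dupes > 0:
        failures.append(f"{key_col}: {dupes} duplicate keys")

    return failures

issues = run_dq_checks(staged, key_col="customer_id",
                       required_cols=["customer_id", "kyc_status"])
if issues:
    # In a real pipeline this might route rows to an exception table and alert instead.
    raise ValueError("DQ checks failed: " + "; ".join(issues))

# Checks passed: append the validated batch to the target Iceberg table.
staged.writeTo("lakehouse.curated.customer").append()       # assumes an Iceberg catalog
```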
Posted 1 month ago
3.0 - 5.0 years
5 - 7 Lacs
Hyderabad
Work from Office
Location: Hyderabad (Gachibowli)
Experience: 3-5 Years
Industry: Healthcare (Preferred)

Overview
We are seeking a results-oriented Digital Marketing Analyst/Executive with a strong background in SEO, especially Local SEO, and a passion for driving measurable results. If you have experience in healthcare and have successfully scaled organic traffic across multiple locations, this is your opportunity to shine. Your mission will be to take our web traffic from 25K to 150K+ in just 6-9 months.

Roles & Responsibilities:

Local SEO Mastery
- Manage and optimize Google Business Profiles (GMB) for multiple locations.
- Implement geo-targeted strategies to rank for "near me" and location-specific search terms.
- Ensure NAP (Name, Address, Phone) consistency across directories.
- Build and optimize location-specific landing pages and blog content.

Organic Traffic Growth & SEO Strategy
- Design and execute end-to-end SEO strategies for aggressive traffic growth.
- Conduct keyword research, competitor analysis, and SERP audits.
- Optimize on-page elements: metadata, headings, schema, and content structures.
- Develop content clusters and pillar pages to build domain authority.

Link Building Execution
- Lead white-hat link-building campaigns: guest blogging, PR outreach, citations.
- Build links from niche healthcare directories, influencers, and forums.
- Monitor and maintain a healthy backlink profile.

Analytics & Reporting
- Track performance using Google Analytics 4, Search Console, and Looker Studio.
- Set and measure KPIs like traffic growth, lead generation, and keyword rankings.
- Conduct regular SEO audits and performance reviews.

Multi-Location SEO Management
- Coordinate SEO efforts across 25+ physical locations.
- Resolve common SEO issues such as duplicate content, cannibalization, and internal linking.
- Ensure that each location page is SEO-optimized and user-friendly.

Collaboration & Strategy
- Work closely with content creators, designers, developers, and paid media teams.
- Provide SEO input for UX/CRO improvements.
- Stay current on SEO trends, Google algorithm updates, and AI-driven SEO tools.

Candidate Requirements
- 3-5 years of proven experience in digital marketing with a focus on SEO.
- Experience working in or with the healthcare sector.
- Demonstrated ability to grow organic traffic 3x-5x within a short time frame.
- Proficiency in local SEO management for 20+ locations.
- Expertise with tools like Ahrefs, SEMrush, Screaming Frog, GMB, and GA4.
- Strong communication, analytical, and project management skills.

Note: The selected candidate will be required to work on-site at the client's location, 6 days a week.
Posted 1 month ago
3.0 - 6.0 years
10 - 14 Lacs
Bengaluru
Work from Office
As an entry-level Application Developer at IBM, you'll work with clients to co-create solutions to major real-world challenges by using best-practice technologies, tools, techniques, and products to translate system requirements into the design and development of customized systems.

In your role, you may be responsible for:
- Working across the entire system architecture to design, develop, and support high-quality, scalable products and interfaces for our clients
- Collaborating with cross-functional teams to understand requirements and define technical specifications for generative AI projects
- Employing IBM's Design Thinking to create products that provide a great user experience along with high performance, security, quality, and stability
- Working with a variety of relational databases (SQL, Postgres, DB2, MongoDB), operating systems (Linux, Windows, iOS, Android), and modern UI frameworks (Backbone.js, AngularJS, React, Ember.js, Bootstrap, and jQuery)
- Creating everything from mockups and UI components to algorithms and data structures as you deliver a viable product

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- SQL authoring, query, and cost optimisation, primarily on BigQuery.
- Python as an object-oriented scripting language.
- Data pipeline, data streaming, and workflow management tools: Dataflow, Pub/Sub, Hadoop, Spark Streaming.
- Version control system: Git. Preferably, knowledge of Infrastructure as Code: Terraform.
- Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL), as well as working familiarity with a variety of databases.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions.

Preferred technical and professional experience:
- Experience building and optimising data pipelines, architectures, and data sets.
- Building processes supporting data transformation, data structures, metadata, dependency, and workload management.
- Working knowledge of message queuing, stream processing, and highly scalable data stores.
- Experience supporting and working with cross-functional teams in a dynamic environment.
- We are looking for a candidate with experience in a Data Engineer role who is also familiar with Google Cloud Platform.
Posted 1 month ago
10.0 - 12.0 years
8 - 13 Lacs
Mumbai
Work from Office
John Cockerill, enablers of opportunities Driven since 1817 by the entrepreneurial spirit and thirst for innovation of its founder, the John Cockerill Group develops large-scale technological solutions to meet the needs of its time: facilitating access to low carbon energies, enabling sustainable industrial production, preserving natural resources, contributing to greener mobility, enhancing security and installing essential infrastructures. Its offer to businesses, governments and communities consists of services and associated equipment for the sectors of energy, defence, industry, the environment, transports, and infrastructures. With over 6,000 employees, John Cockerill achieved a turnover of 1,209 billion in 2023 in 29 countries, on 5 continents. Location - Navi Mumbai The John Cockerill Group develops large-scale technological solutions to meet the needs of our time: preserving natural resources, contributing to greener mobility, producing sustainably, combating insecurity, and facilitating access to renewable energy. Its offer to companies, governments and local authorities takes the form of services and associated equipment for the energy, defense, industry, environment, transport, and infrastructure sectors. Driven since 1817 by the entrepreneurial spirit and thirst for innovation of its founder, the group employs 6,000 people who have enabled it to achieve revenues of more than one billion euros in 2020 in 23 countries on 5 continents. Context The John Cockerill group manufactures and supplies water electrolysis equipment for hydrogen production. John Cockerill develops integrated solutions including power supply, water treatment, compressor and H2 storage, hydrogen refueling station. To support the development of the Hydrogen Business Line, John Cockerill is looking for an Analyst and SAP Data Manager . The role may evolve according to business needs. The Analyst and SAP Data Manager will report to the Supply Chain Process Manager. Responsibilites The Analyst and SAP Data Manager will be responsible for SAP Materiel Requests Management Validate material master data creation requests. Follow-up pending requests, edit weekly status. Be the point of contact for Supply Chain SAP Master data related topics. Escalate accordingly when the agreement cannot be reached. The Analyst and SAP Data Manager will be responsible for Creation and maintenance : Create Supply Chain master data in SAP, maintain and complete in accordance with requirement. Follow up the delivery of Master Data, both in terms of deadlines and quality. Cleanse and correct incorrect data. The Analyst and SAP Data Manager will also be responsible for the maintenance and optimization of the team s SharePoint environment. This includes: Regularly cleaning and organizing folders and files to ensure accessibility, Adding and maintaining metadata to improve document classification and searchability, Ensuring compliance with internal data governance policies, Collaborating with team members to ensure SharePoint content remains up-to-date and relevant. The Analyst and SAP Data Manager will work closely with Global Master Data Officer, Configuration Management, Engineering Department, Industrialization Engineers, Quality, Procurement, Project Management and IT Department. Profile Min 10 to 12 years experience in setting up and maintaining Master Data in the industrial sector. Experience with SAP MM is an asset. Good technical competencies. 
Experience with SharePoint administration and metadata management.
Accurate in keeping track of open actions and their realization.
Resilient and solution-oriented (the projects under development sit in a business that is expected to grow exponentially in the coming years).
Result-oriented, reliable and a team player.
Do you want to work for an innovative company that will allow you to take up technical challenges on a daily basis? Discover our job opportunities in detail on www.johncockerill.com
Posted 1 month ago
3.0 - 8.0 years
45 - 50 Lacs
Hyderabad
Work from Office
Payroll Technology at Amazon is all about enabling our business to perform at scale as efficiently as possible, with no defects. As Amazon's workforce grows, both in size and geography, Amazon's payroll operations become increasingly complex, and our customers are asked to do more with less. Process can only get them so far, and that's where we come in with technology solutions to integrate and automate systems, detect defects before payment, and provide insights. As a data engineer in payroll, you will onboard payroll vendors across various geographies by building versatile and scalable design solutions. Strong written and verbal communication, and the ability to communicate with end users in non-technical terms, are vital to your long-term success. The ideal candidate will have experience working with large datasets, distributed computing technologies and service-oriented architecture, relishes working with large volumes of data, and enjoys the challenge of highly complex technical contexts. He/she should be an expert in data modeling, ETL design and business intelligence tools, and have hands-on knowledge of columnar databases. He/she is a self-starter, comfortable with ambiguity, able to think big, and enjoys working in a fast-paced team.

Responsibilities:
Design, build and own all the components of a high-volume data warehouse end to end.
Build efficient data models using industry best practices and metadata for ad-hoc and pre-built reporting.
Provide wing-to-wing data engineering support for project lifecycle execution (design, execution and risk assessment).
Interface with business customers, gathering requirements and delivering complete data and reporting solutions, owning the design, development, and maintenance of ongoing metrics, reports, dashboards, etc. to drive key business decisions.
Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.
Interface with other technology teams to extract, transform, and load (ETL) data from a wide variety of data sources.
Own the functional and non-functional scaling of software systems in your ownership area.
Implement big data solutions for distributed computing.
Be willing to learn and develop a strong skill set in AWS technologies.

As a DE on our team, you will be responsible for leading the data modeling, database design, and launch of some of the core data pipelines. You will have significant influence on our overall strategy by helping define the data model, driving the database design, and spearheading the best practices needed to deliver high-quality products.

A day in the life
You are expected to do data modeling, database design, build data pipelines to Amazon standards, participate in design reviews, and support data privacy and security initiatives. You will attend regular stand-up meetings and provide your updates. You will keep an eye out for opportunities to improve the product or user experience and suggest those enhancements. You will participate in requirement grooming meetings to ensure the use cases we deliver are complete and functional. You will take your turn at on-call and own production operational maintenance. You will respond to customer issues and monitor databases for healthy state and performance.

About the team
Our mission is to build applications which solve the challenges Global Payroll Operations teams face on a daily basis, automate the tasks they perform manually, provide them a seamless experience by integrating with other dependent systems, and eventually reduce pay defects and improve pay accuracy.

3+ years of data engineering experience
4+ years of SQL experience
Experience with data modeling, warehousing and building ETL pipelines
Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions (a minimal loading sketch follows below)
Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)
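The AWS qualifications above (Redshift, S3, IAM roles) map onto a very common warehouse-loading pattern: staging S3 extracts into a Redshift table with a COPY statement. Below is a minimal sketch using boto3's redshift-data API; the cluster, database, user, role ARN, bucket and table names are all illustrative assumptions, not details from the posting.

```python
import time
import boto3

# All identifiers below are hypothetical placeholders.
CLUSTER = "example-dw"
DATABASE = "analytics"
DB_USER = "etl_user"
IAM_ROLE = "arn:aws:iam::123456789012:role/redshift-copy-role"

client = boto3.client("redshift-data", region_name="us-east-1")

# COPY stages a partitioned S3 extract (Parquet) into a staging table.
copy_sql = f"""
    COPY staging.payments
    FROM 's3://example-bucket/extracts/2024/01/'
    IAM_ROLE '{IAM_ROLE}'
    FORMAT AS PARQUET;
"""

resp = client.execute_statement(
    ClusterIdentifier=CLUSTER,
    Database=DATABASE,
    DbUser=DB_USER,
    Sql=copy_sql,
)

# The statement runs asynchronously, so poll until it completes.
while True:
    desc = client.describe_statement(Id=resp["Id"])
    if desc["Status"] in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(2)

print("COPY status:", desc["Status"], desc.get("Error", ""))
```

In practice the same COPY step is usually wrapped in an orchestration tool (Glue job, Lambda, or a workflow scheduler) with retries and data-quality checks around it; the sketch only shows the core load.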
Posted 1 month ago
3.0 - 6.0 years
10 - 14 Lacs
Hyderabad
Work from Office
As a Data Engineer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
Implementing and validating predictive models, as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques
Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements (a minimal indexing and search sketch follows below)
Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours
Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
Experience in the integration efforts between Alation and Manta, ensuring seamless data flow and compatibility.
Collaborate with cross-functional teams to gather requirements and design solutions that leverage both Alation and Manta platforms effectively.
Develop and maintain data governance processes and standards within Alation, leveraging Manta's data lineage capabilities.
Analyze data lineage and metadata to provide insights into data quality, compliance, and usage patterns.

Preferred technical and professional experience:
Lead the evaluation and implementation of new features and updates for both Alation and Manta platforms, ensuring alignment with organizational goals and objectives.
Drive continuous improvement initiatives to enhance the efficiency and effectiveness of data management processes, leveraging Alation.
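Since the responsibilities above mention building enterprise search applications on Elasticsearch, here is a minimal sketch using the official elasticsearch Python client; the cluster URL, index name and document fields are assumptions for illustration only.

```python
from elasticsearch import Elasticsearch

# Hypothetical local cluster and index; adjust URL, auth and mappings for a real deployment.
es = Elasticsearch("http://localhost:9200")

doc = {
    "asset": "customer_orders",
    "description": "Curated orders table with lineage back to the source CRM extract",
    "steward": "data-governance-team",
}

# Index a metadata document, then search it back with a full-text match query.
es.index(index="catalog-assets", id="customer_orders", document=doc)
es.indices.refresh(index="catalog-assets")

resp = es.search(index="catalog-assets", query={"match": {"description": "lineage"}})
for hit in resp["hits"]["hits"]:
    print(hit["_id"], hit["_score"])
```

The same index-then-query pattern scales from a single metadata record to full catalog search; production use would add explicit mappings, authentication, and bulk indexing.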
Posted 1 month ago
3.0 - 8.0 years
7 - 11 Lacs
Mumbai
Work from Office
3+ years of hands-on experience with the Collibra tool.
Knowledge of Collibra DGC version 5.7 and onward.
Experience in Spring Boot development.
Experience with Groovy and Flowable for BPMN workflow development.
Experience with both business and technical metadata.
Experience with platform activities such as job server setup and upgrade.

Working as SME in data governance, metadata management and data catalog solutions, specifically on Collibra Data Governance. Client interface and consulting skills required.
Experience in data governance of a wide variety of data types (structured, semi-structured and unstructured data) and a wide variety of data sources (HDFS, S3, Kafka, Cassandra, Hive, HBase, Elasticsearch).
Partner with Data Stewards for requirements, integrations and processes; participate in meetings and working sessions.
Partner with Data Management and integration leads to improve data management technologies and processes.
Working experience of the Collibra operating model, workflow BPMN development, and how to integrate various applications or systems with Collibra.
Experience in setting up people's roles, responsibilities and controls, data ownership, workflows and common processes.
Integrate Collibra with other enterprise tools: data quality tools, data catalog tools, master data management solutions.
Develop and configure all Collibra customized workflows.
Develop APIs (REST, SOAP) to expose metadata functionality to end users (an illustrative REST sketch follows below).

Location: Pan India
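The last item, exposing metadata functionality over REST, can be illustrated with a thin client wrapper around a metadata endpoint. The sketch below uses Python's requests library; the base URL, endpoint path, credentials and response shape are hypothetical placeholders and do not reflect Collibra's documented API.

```python
import requests

# Hypothetical governance platform endpoint and credentials; not Collibra's documented API.
BASE_URL = "https://governance.example.com/api"
SESSION = requests.Session()
SESSION.auth = ("svc_metadata", "change-me")


def find_assets(name_contains: str, limit: int = 20) -> list:
    """Return metadata assets whose name contains the given substring."""
    resp = SESSION.get(
        f"{BASE_URL}/assets",
        params={"name": name_contains, "limit": limit},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("results", [])


if __name__ == "__main__":
    for asset in find_assets("customer"):
        print(asset.get("id"), asset.get("name"), asset.get("status"))
```

Wrapping the platform's endpoints behind small functions like this keeps downstream consumers insulated from authentication and pagination details when the underlying catalog API changes.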
Posted 1 month ago
12.0 - 15.0 years
55 - 60 Lacs
Ahmedabad, Chennai, Bengaluru
Work from Office
Dear Candidate,
We are looking for a Data Governance Specialist to establish and maintain data governance policies, ensuring data quality, compliance, and responsible usage across the organization.

Key Responsibilities:
Define and enforce data governance policies and standards.
Work with data owners and stewards to improve data quality.
Monitor compliance with data privacy regulations (GDPR, HIPAA, etc.).
Support metadata management, data lineage, and cataloging initiatives.
Promote data literacy across departments.

Required Skills & Qualifications:
Experience with data governance tools (Collibra, Alation, Informatica).
Knowledge of regulatory frameworks and data privacy laws.
Strong analytical and documentation skills.
Understanding of data architecture, MDM, and data stewardship.
Excellent stakeholder management and communication skills.

Soft Skills:
Strong troubleshooting and problem-solving skills.
Ability to work independently and in a team.
Excellent communication and documentation skills.

Note: If interested, please share your updated resume and your preferred time for a discussion. If shortlisted, our HR team will contact you.

Srinivasa Reddy Kandi
Delivery Manager
Integra Technologies
Posted 1 month ago