8.0 - 13.0 years
25 - 30 Lacs
Noida
Work from Office
Your Job Responsibilities
Optimize content structure, internal linking, metadata, schema, and keyword targeting. Conduct content audits and recommend improvements for search visibility. Collaborate with content writers to drive SEO-first content creation. Build and manage a strong backlink profile through ethical link-building strategies. Identify PR, guest-blogging, and partnership opportunities to improve domain authority. Monitor competitor backlink strategies and recommend improvements. Conduct regular technical audits using tools like Screaming Frog, SEMrush, and Ahrefs. Resolve issues related to crawlability, indexation, page speed, and mobile responsiveness. Work with developers to implement structured data, canonical tags, and site architecture improvements (a structured-data sketch follows this posting). Track performance using GA4, GSC, SEMrush, and other analytics tools. Define and monitor KPIs for organic growth and technical health. Prepare regular SEO performance reports for leadership. Work closely with content, design, development, and product marketing teams. Guide any website migration (e.g., from Drupal to Webflow) with minimal SEO impact. Collaborate with HubSpot and Salesforce users for marketing automation alignment.
Must-have Qualifications, Skills & Experience
8+ years of hands-on experience in on-page, off-page, and technical SEO. Proven success in B2B tech/SaaS, preferably cloud or IT services. Strong knowledge of Google algorithms, Core Web Vitals, and schema implementation. Proficiency in tools like GA4, GSC, SEMrush, Ahrefs, Screaming Frog, and HubSpot. Experience working with CMSs like Drupal, HubSpot, and/or Webflow. Strong analytical, communication, and project management skills. Self-starter with the ability to work in a fast-paced, results-driven environment.
Good-to-have Skills & Experience
Experience with international SEO (India and US markets). Understanding of FinOps or the cloud ecosystem (AWS, Azure, GCP). Familiarity with Salesforce and SEO automation tools.
Manager/Deputy Manager SEO, 8-12 years, Noida
About CloudKeeper
CloudKeeper is a cloud cost optimization partner that combines the power of group buying and commitments management, expert cloud consulting and support, and an enhanced visibility and analytics platform to reduce cloud cost and help businesses maximize value from the cloud. A certified AWS Premier Partner and Google Cloud Partner, CloudKeeper has helped 400+ global companies save an average of 20% on their cloud bills, modernize their cloud set-up, and maximize value, all while maintaining flexibility and avoiding long-term commitments or costs. CloudKeeper hived off from TO THE NEW, a digital technology services company with 2500+ employees and an 8-time Great Place to Work winner. Our story: 15+ years in business, 250+ CKers and growing, 400+ customers.
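For illustration only (not part of the posting): a minimal sketch of the kind of structured-data markup an SEO lead might specify for developers, generated here in Python. The article fields, organization name, and URL are hypothetical placeholders.

```python
import json

# Hypothetical Article JSON-LD block of the kind an SEO team might specify;
# every field value here is a placeholder, not actual company content.
article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "How Group Buying Cuts Cloud Bills",
    "author": {"@type": "Organization", "name": "Example Co"},
    "datePublished": "2024-01-15",
    "mainEntityOfPage": "https://www.example.com/blog/group-buying-cloud",
}

# Embedded in the page <head> as a JSON-LD script tag:
print(f'<script type="application/ld+json">{json.dumps(article_schema)}</script>')
```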
Posted 1 month ago
7.0 - 12.0 years
9 - 13 Lacs
Gurugram, Bengaluru
Work from Office
PAN India - Data Migration
Experience: 7 to 12 years. Location: PAN India.
Hands-on experience in data migration. Knowledge of Salesforce objects and metadata. Knowledge of SQL Server. Good knowledge of Excel. Understanding of Data Loader, Inspector, and Workbench. Good knowledge of SQL. Data integration knowledge. Admin and PD1 certified.
Skills: Data Loader, Data Migration, Salesforce Data Migration
Posted 1 month ago
8.0 - 13.0 years
25 - 30 Lacs
Hyderabad
Work from Office
The Connected Customer Experience (CCX) team creates consumer-grade digital experiences and products that help our customers and partners be successful and realize the value of their ServiceNow investment. Leveraging the latest technologies, and built on ServiceNow's intelligent platform, we deliver a seamless, personalized experience at every step of our customers' journey. The products we build power digital business for ServiceNow and can even become commercially available. As a Staff AI/ML Software Engineer within CCX, you will build data pipelines, ML models, and secure code that is scalable and reusable. Grow our business by bringing internal products to the world, and find new ways to personalize with AI/ML and simplify how employees and customers work. Implement software that empowers internal customers to extend what it can do to meet their specific needs, as our own "customer zero."
What you get to do in this role:
Design and build scalable search ranking, indexing, and personalization systems. Develop real-time and batch ML models using embeddings, collaborative filtering, and deep learning (a minimal embedding-ranking sketch follows this posting). Integrate user behavior signals, session data, and content metadata to optimize relevance. Work with LLM technologies, including generative and embedding techniques, modern model architectures, retrieval-augmented generation (RAG), fine-tuning and pre-training LLMs (including parameter-efficient fine-tuning), deep reinforcement learning, and evaluation benchmarks. Collaborate cross-functionally with product, data, and infra teams to deploy experiments and measure impact. Optimize retrieval, filtering, and ranking algorithms in production search pipelines. Deliver real-time personalization using query embeddings for search ranking. Monitor model performance and continuously iterate using A/B testing and offline evaluation metrics. Bring experience in MLOps and model governance, strong analytical and quantitative problem-solving ability, and deep expertise in distributed computing strategies on Azure, AWS, or GCP clusters, enhancing parallel processing capabilities.
To be successful in this role you have:
Experience in leveraging or critically thinking about how to integrate AI into work processes, decision-making, or problem-solving. This may include using AI-powered tools,
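For illustration only (not part of the posting): a toy sketch of query-embedding search ranking of the kind described above, assuming the sentence-transformers library; the model choice and documents are placeholders.

```python
# Toy sketch of real-time personalization via query embeddings; the model name
# and documents are illustrative assumptions, not posting requirements.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose encoder

docs = [
    "Reset your ServiceNow password",
    "Configure a catalog item",
    "Troubleshoot MID Server connectivity",
]
doc_vecs = model.encode(docs, normalize_embeddings=True)

def rank(query: str) -> list[tuple[float, str]]:
    """Score documents by cosine similarity to the query embedding."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q  # cosine similarity, since vectors are normalized
    order = np.argsort(-scores)
    return [(float(scores[i]), docs[i]) for i in order]

print(rank("my password is not working"))
```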
Posted 1 month ago
4.0 - 9.0 years
6 - 11 Lacs
Hyderabad
Work from Office
Hiring for Salesforce Developer - Hyderabad
Job Summary: We are seeking a skilled Salesforce Developer with 4+ years of hands-on experience in Salesforce development and customization. The ideal candidate will be responsible for designing, developing, and maintaining Salesforce applications using Apex, Visualforce, Lightning components, and integrations with third-party systems. This role requires strong technical acumen, problem-solving ability, and effective communication skills.
Key Responsibilities: Develop customized solutions within the Salesforce platform using Apex, Visualforce, and Lightning Web Components (LWC). Collaborate with cross-functional teams (admins, architects, business analysts) to gather requirements and deliver scalable solutions. Integrate Salesforce with other systems using REST/SOAP APIs, middleware, or ETL tools (a minimal REST query sketch follows this posting). Perform unit testing and participate in code reviews to ensure code quality and adherence to best practices. Maintain and enhance existing Salesforce applications. Implement and manage workflows, validation rules, Process Builder processes, and flows. Deploy metadata using change sets, ANT, or CI/CD pipelines. Stay current on Salesforce releases and recommend new features or optimizations.
Required Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field. 4+ years of hands-on Salesforce development experience. Proficiency in Apex, SOQL, SOSL, Visualforce, LWC, and Aura components. Experience with Salesforce APIs, integration patterns, and middleware (e.g., MuleSoft, Dell Boomi). Strong understanding of the Salesforce data model and security model. Familiarity with Agile/Scrum methodologies and DevOps tools (e.g., Git, Copado, Jenkins). Salesforce Platform Developer I certification required.
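For illustration only (not part of the posting): a minimal sketch of querying Salesforce over its REST API with Python's requests library. The instance URL, token, and API version are placeholders; in practice they come from an OAuth flow.

```python
import requests

# Hypothetical credentials for illustration only.
INSTANCE_URL = "https://example.my.salesforce.com"
ACCESS_TOKEN = "00D...token"
API_VERSION = "v60.0"

def run_soql(soql: str) -> list[dict]:
    """Run a SOQL query via the Salesforce REST query endpoint and return records."""
    resp = requests.get(
        f"{INSTANCE_URL}/services/data/{API_VERSION}/query",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        params={"q": soql},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["records"]

accounts = run_soql("SELECT Id, Name FROM Account LIMIT 5")
for a in accounts:
    print(a["Id"], a["Name"])
```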
Posted 1 month ago
7.0 - 12.0 years
9 - 14 Lacs
Bengaluru
Work from Office
Skills: HFM, Planning/PBCS. Shift: changes every month; a shift allowance will be provided. Notice period: immediate. Location: Bangalore.
Job Description and Qualifications
7 years and above of experience required. Sound understanding of HFM, Planning/PBCS, and Essbase functionality: security, metadata, dimensionality, etc. Hands-on experience with Hyperion infrastructure maintenance and patching is required. Understanding of accounting/finance processes and terminology - processes: month-end close, budgets, forecasts, etc.; terminology: debits, credits, assets, liabilities, with at least a high-level understanding of the P&L and balance sheet. Familiarity with HFM and Planning rules (proficiency not required, but the ability to understand and learn). Sound understanding of FDMEE/DM mappings. Proficiency in using the Excel Smart View add-in. Ability to understand how automated job schedules work and to troubleshoot errors and failures. Ability to reconcile data between applications. Overall problem-solving and analytical skills.
Posted 1 month ago
1.0 - 3.0 years
3 - 5 Lacs
Kolkata, Mumbai, New Delhi
Work from Office
EDMCS consultant with 8+ years of relevant experience in the following areas: proficiency in dimension management, hierarchy maintenance, and metadata governance; experience in creating applications and views for managing reference data across EPM and ERP systems; understanding of data validation, workflow approvals, and change tracking; integration with EPBCS, FCCS, ARCS, and external ERP systems like Oracle Fusion or SAP; ability to manage data governance policies and audit history.
Posted 1 month ago
5.0 - 10.0 years
50 - 100 Lacs
Bengaluru
Work from Office
Roles and Responsibility
Job Overview
We are looking for a savvy Data Engineer to join our growing team of data engineers. The hire will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, and data analysts on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company's data architecture to support our next generation of products and data initiatives.
Responsibilities for Data Engineer
Create and maintain optimal data pipeline architecture. Assemble large, complex data sets that meet functional and non-functional business requirements. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and Spark on Azure big data technologies (a minimal PySpark sketch follows this posting). Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics. Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs. Work with data and analytics experts to strive for greater functionality in our data systems.
Qualifications for Data Engineer
Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL), and working familiarity with a variety of databases. Experience building and optimizing big data pipelines, architectures, and data sets. Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement. Strong analytic skills related to working with unstructured datasets. Ability to build processes supporting data transformation, data structures, metadata, dependency, and workload management. A successful history of manipulating, processing, and extracting value from large, disconnected datasets. Working knowledge of message queuing, stream processing, and highly scalable big data stores. Strong project management and organizational skills. Experience supporting and working with cross-functional teams in a dynamic environment.
We are looking for a candidate with 5+ years of experience in a Data Engineer role, with experience using the following software/tools: big data tools such as Hadoop, Spark, and Kafka; relational SQL and NoSQL databases, including Azure SQL, Cosmos DB, and Couchbase; data pipeline and workflow management tools such as Azure Data Factory and Synapse Pipelines; Azure cloud services such as Databricks, Synapse Analytics, Azure Functions, and ADLS; stream-processing systems such as Storm and Spark Streaming; and object-oriented/functional scripting languages such as Python, Java, C++, and Scala.
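For illustration only (not part of the posting): a minimal PySpark extract-transform-load sketch of the kind described above. The ADLS paths and column names are hypothetical placeholders.

```python
# Minimal PySpark ETL sketch: read raw CSV, clean and aggregate, write Parquet.
# Paths, columns, and the ADLS containers are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw = (
    spark.read.option("header", True)
    .csv("abfss://raw@examplelake.dfs.core.windows.net/orders/")
)

daily_revenue = (
    raw.withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount").isNotNull())
    .groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(F.sum("amount").alias("revenue"), F.count("*").alias("orders"))
)

daily_revenue.write.mode("overwrite").parquet(
    "abfss://curated@examplelake.dfs.core.windows.net/daily_revenue/"
)
```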
Posted 1 month ago
5.0 - 10.0 years
5 - 9 Lacs
Hyderabad
Work from Office
Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must have skills: Multiplatform Front End Development React, Amazon Web Services (AWS), React Native, React.js
Good to have skills: NA
Minimum 5 year(s) of experience is required. Educational Qualification: 15 years full time education.
Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements. Your typical day will involve working with Amazon Web Services (AWS), React.js, and React Native to develop multiplatform front-end applications.
Key Responsibilities: 1. Understand requirements and participate in design and implementation. 2. Collaborate with peers who have domain expertise to build the right solution the business needs. 3. Be self-driven and capable of managing multiple priorities under pressure and ambiguity. 4. Work effectively in a fast-paced environment. 5. Bring a keen eye for usability, creating intuitive, visually appealing experiences. 6. Build the UI consumers will use to extract relevant data from the metadata repository, with development work in the search space using Elasticsearch.
Technical Experience: 1. 7+ years of experience developing with React.js. 2. State management with Redux. 3. Strong fundamental JavaScript skills (ES5 and ES6) and CSS skills. 4. Experience with TypeScript or ClojureScript is a plus. 5. Thorough understanding of React.js and its core principles; React combined with Flux/Redux experience is preferred. 6. Experience developing component-driven UIs. 7. Experience with data structure libraries. 8. Knowledge of performance optimization techniques. 9. Knowledge of AWS services and deployment is an advantage.
Additional Information: The candidate should have a minimum of 5 years of experience in Multiplatform Front End Development React. The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful multiplatform front-end solutions. Ready to work in B shift (12 PM to 10 PM).
Qualification: 15 years full time education
Posted 1 month ago
1.0 - 4.0 years
3 - 9 Lacs
Siliguri
Work from Office
Admini Boosting Productivity is looking for a Data Analyst to join our dynamic team and embark on a rewarding career journey. Managing master data, including creation, updates, and deletion. Managing users and user roles. Providing quality assurance of imported data, working with quality assurance analysts if necessary. Commissioning and decommissioning data sets. Processing confidential data and information according to guidelines. Helping develop reports and analysis. Managing and designing the reporting environment, including data sources, security, and metadata. Supporting the data warehouse in identifying and revising reporting requirements. Supporting initiatives for data integrity and normalization. Assessing and implementing new or upgraded software and assisting with strategic decisions on new systems. Generating reports from single or multiple systems. Troubleshooting the reporting database environment and reports. Evaluating changes and updates to source production systems. Training end-users on new reports and dashboards. Providing technical expertise in data storage structures, data mining, and data cleansing.
Posted 1 month ago
2.0 - 5.0 years
4 - 7 Lacs
Chandigarh
Work from Office
Airawat Research Foundation is looking for a Data Analyst - eGOV to join our dynamic team and embark on a rewarding career journey. Managing master data, including creation, updates, and deletion. Managing users and user roles. Providing quality assurance of imported data, working with quality assurance analysts if necessary. Commissioning and decommissioning data sets. Processing confidential data and information according to guidelines. Helping develop reports and analysis. Managing and designing the reporting environment, including data sources, security, and metadata. Supporting the data warehouse in identifying and revising reporting requirements. Supporting initiatives for data integrity and normalization. Assessing and implementing new or upgraded software and assisting with strategic decisions on new systems. Generating reports from single or multiple systems. Troubleshooting the reporting database environment and reports. Evaluating changes and updates to source production systems. Training end-users on new reports and dashboards. Providing technical expertise in data storage structures, data mining, and data cleansing.
Posted 1 month ago
0.0 - 2.0 years
0 - 1 Lacs
Warangal, Hyderabad, Nizamabad
Work from Office
At Mango Mass Media Pvt Ltd, we are looking for a detail-oriented and tech-savvy Data Entry Executive Trainee to support our internal Digital Asset Management (DAM) system. This role involves managing metadata and ensuring accurate, consistent, and timely data entry into our proprietary content system. This is an ideal opportunity for recent graduates who have excellent typing speed, solid Excel knowledge, and an interest in the media and entertainment space.
Key Responsibilities
Accurately input data into the DAM system as per internal guidelines. Maintain well-organized records and metadata for digital content. Use MS Excel to update, clean, validate, and cross-check data. Meet daily and weekly deadlines for data entry tasks. Perform quality checks to ensure accuracy and consistency. Quickly adapt to internal software tools and data workflows. Coordinate with project and content teams for clarifications and updates.
Skills Required
Fast and accurate typing skills. Proficiency in MS Excel (basic formulas, formatting, data cleaning). General computer literacy and ease with internal tools/databases. Strong attention to detail and a high level of accuracy. Ability to manage time effectively and consistently meet deadlines.
Good to Have
Interest in digital content, media, or OTT platforms. Familiarity with Digital Asset Management (DAM) systems (preferred but not mandatory).
Soft Skills
Quick learner with a proactive mindset. Team player with a strong work ethic. Adaptable to fast-changing systems and priorities. Organized, focused, and deadline-driven.
Posted 1 month ago
2.0 - 5.0 years
8 - 12 Lacs
Bengaluru
Work from Office
OPENTEXT - THE INFORMATION COMPANY
OpenText is a global leader in information management, where innovation, creativity, and collaboration are the key components of our corporate culture. As a member of our team, you will have the opportunity to partner with the most highly regarded companies in the world, tackle complex issues, and contribute to projects that shape the future of digital transformation. AI-First. Future-Driven. Human-Centered. At OpenText, AI is at the heart of everything we do, powering innovation, transforming work, and empowering digital knowledge workers. We're hiring talent that AI can't replace to help us shape the future of information management. Join us.
Your Impact
Good logical and analytical skills. Clear verbal and written communication, with the ability to present and explain content to the team, users, and stakeholders. Experience in the pre-sales and sales domain would be an advantage.
What the role offers
Excellent knowledge of the Enterprise Content Management and Governance domain (data discovery, analysis, classification, and management). Experience with sensitive information management or PII discovery on unstructured data (GDPR). Strong experience with TRIM/HPRM/HP Records Manager/OpenText Content Manager deployment, customization, and upgrades. Experience in data governance, metadata management, and data quality frameworks using TRIM/HPRM/Records Manager/OpenText Content Manager. Strong experience leading the end-to-end design, architecture, and implementation of ECM (Enterprise Content Management) solutions. Strong experience defining and implementing master data governance processes, including data quality, metadata management, and business ownership. Experience managing extremely large record sets and hundreds of users across global sites. Experience with Enterprise Content Management cloud solutions and migration. Experience with TRIM/Records Manager/Content Manager integration with third-party applications like SharePoint, Iron Mountain, O365, etc. Strong knowledge of Content Manager security, audit configurations, and workflow. Hands-on experience with SQL database queries. Microsoft C#.NET (programming language C#) experience: web services development, web app development, Windows scheduler development, Content Manager/Records Manager .NET SDK programming (used in the above-mentioned web services and schedulers), and custom add-in development. Troubleshoot problems as necessary.
What you need to succeed
Experience with capture, imaging, and scanning products and technologies. Hands-on experience with ECM products like OpenText xECM and Content Server. Cloud certification. Knowledge of operating systems (Windows/Unix) and basic networking.
OpenText's efforts to build an inclusive work environment go beyond simply complying with applicable laws. Our Employment Equity and Diversity Policy provides direction on maintaining a working environment that is inclusive of everyone, regardless of culture, national origin, race, color, gender, gender identification, sexual orientation, family status, age, veteran status, disability, religion, or other basis protected by applicable laws. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please contact us at hr@opentext.com. Our proactive approach fosters collaboration, innovation, and personal growth, enriching OpenText's vibrant workplace.
Posted 1 month ago
8.0 - 13.0 years
11 - 15 Lacs
Bengaluru
Work from Office
Tech Permanent Job Description
Be part of something bigger. Decode the future. At Electrolux, as a leading global appliance company, we strive every day to shape living for the better for our consumers, our people, and our planet. We share ideas and collaborate so that together, we can develop solutions that deliver enjoyable and sustainable living. Come join us as you are. We believe diverse perspectives make us stronger and more innovative. In our global community of people from 100+ countries, we listen to each other, actively contribute, and grow together.
All about the role
We are looking for an Engineer to help drive our global MarTech strategy forward, with a particular focus on data engineering and data science to design and scale our customer data infrastructure. You will work closely with cross-functional teams - from engineering and product to data and CX teams - ensuring scalable, future-ready solutions that enhance both consumer and business outcomes. Great innovation happens when complexity is tamed and possibilities are unleashed. That's what we firmly believe! Join our team at Electrolux, where we lead digital transformation efforts. We specialize in developing centralized solutions to enhance inter-system communications, integrate third-party platforms, and establish ourselves as the master data source within Electrolux. Our focus is on delivering high-performance, scalable solutions that consistently achieve top-quality results on a global scale. Currently operating in Europe and North America, we are expanding our footprint to all regions worldwide.
About the CDI Experience Organization
The Consumer Direct Interaction Experience Organization is a digital product organization responsible for delivering tech solutions to our end-users and consumers across both pre-purchase and post-purchase journeys. We are organized into 15+ digital product areas, providing solutions ranging from contact center, e-commerce, marketing, and identity to AI. You will play a key role in ensuring the right sizing, skill set, and core competency across these product areas.
What you'll do
Design and implement scalable, secure data architectures that support advanced marketing use cases across platforms such as BlueConic (CDP), SAP CDC (Identity & Consent), Iterable (Marketing Automation), Qualtrics (Experience Management), and Dynamic Yield (Personalization). Define and govern data pipelines for collecting, processing, and enriching first-party and behavioural data from digital and offline touchpoints. Productionize probabilistic and/or machine learning models for audience segmentation, propensity scoring, content recommendations, and predictive analytics (a toy propensity-scoring sketch follows this posting). Collaborate with Data Engineering and Cloud teams to build out event-driven and batch data flows using technologies such as Azure Data Factory, Databricks, Delta Lake, Azure Synapse, and Kafka. Lead the integration of MarTech data with enterprise data warehouses and data lakes, ensuring consistency, accessibility, and compliance. Translate business needs into scalable data models and transformation logic that empower marketing, analytics, and CX stakeholders. Establish data governance and quality frameworks, including metadata management, lineage tracking, and privacy compliance (GDPR, CCPA). Serve as a subject matter expert in both MarTech data architecture and advanced analytics capabilities.
Who you are
Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field. 8+ years of experience in data engineering, analytics platforms, and data science application roles, ideally within a MarTech, CX, or digital environment. Hands-on experience with customer data platforms (CDPs) and integrating marketing data into enterprise ecosystems. Solid programming skills in Python and SQL, and experience working with Spark, ML pipelines, and ETL orchestration frameworks. Experience integrating marketing technology platforms (e.g., Iterable, Qualtrics, Dynamic Yield) into analytical workflows and consumer intelligence layers. Strong understanding of data privacy, consent management, and ethical AI practices. Excellent communication skills with the ability to influence and collaborate effectively across diverse teams and stakeholders. Experience in Agile development environments and working in distributed/global teams.
Where you'll be
This is a full-time position, based in Bangalore, India.
Benefits highlights
Flexible work hours / hybrid work environment. Discounts on our award-winning Electrolux products and services. Family-friendly benefits. Extensive learning opportunities and a flexible career path.
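For illustration only (not part of the posting): a toy propensity-scoring sketch using scikit-learn on synthetic data; the features, labels, and threshold are hypothetical.

```python
# Toy propensity-scoring sketch: fit a logistic model on synthetic engagement
# features and score users for campaign targeting. All data is fabricated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Hypothetical features: sessions (30d), emails opened, days since last purchase
X = rng.normal(size=(1000, 3))
y = (X @ np.array([0.8, 0.5, -0.6]) + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

propensity = model.predict_proba(X_test)[:, 1]  # estimated P(convert) per user
print("top-decile threshold:", np.quantile(propensity, 0.9))
```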
Posted 1 month ago
5.0 - 10.0 years
8 - 12 Lacs
Bengaluru
Work from Office
We are seeking a strategic and technically adept Product Owner to lead our Data & Analytics initiatives, with a strong focus on Data Governance and Data Engineering. This role will be pivotal in shaping and executing the data strategy, ensuring high data quality and compliance, and enabling scalable data infrastructure to support business intelligence and advanced analytics. We are looking for a Data Governance Manager with extensive experience in data governance, quality management, and stakeholder engagement; a proven track record in designing and implementing global data standards and governance frameworks at Daimler Trucks; expertise in managing diverse data sources from multiple domains and platforms; skill in tools such as Alation, Azure Purview, Informatica, or similar products to build a marketplace for data products; excellent communication skills for managing global CDO stakeholders, policy makers, and data practitioners; certifications in Agile (e.g., CSPO), Data Governance (e.g., DCAM), or cloud platforms; and experience with data cataloging tools (e.g., Informatica, Collibra, Alation) and data quality platforms.
Key Responsibilities:
Product Ownership & Strategy: Define and maintain the product vision, roadmap, and backlog for data governance and engineering initiatives. Collaborate with stakeholders across business units to gather requirements and translate them into actionable data solutions. Prioritize features and enhancements based on business value, technical feasibility, and compliance needs.
Data Governance: Lead the implementation of data governance frameworks, policies, and standards. Ensure data quality, lineage, metadata management, and compliance with regulatory requirements (e.g., GDPR, CCPA). Partner with legal, compliance, and security teams to manage data risks and ensure ethical data usage.
Data Engineering: Oversee the development and maintenance of scalable data pipelines and infrastructure. Work closely with data engineers to ensure robust ETL/ELT processes, data warehousing, and integration with analytics platforms. Advocate for best practices in data architecture, performance optimization, and cloud-based data solutions.
Stakeholder Engagement: Act as the primary liaison between technical teams and business stakeholders. Facilitate sprint planning, reviews, and retrospectives with Agile teams. Communicate progress, risks, and dependencies effectively to leadership and stakeholders.
Qualifications:
Education & Experience: Bachelor's or Master's degree in Computer Science, Information Systems, Data Science, or a related field. 5+ years of experience in data-related roles, with at least 2 years as a Product Owner or in a similar role. Proven experience in data governance and data engineering within enterprise environments.
Skills & Competencies: Strong understanding of data governance principles, data privacy laws, and compliance frameworks. Hands-on experience with data engineering tools and platforms (e.g., Apache Spark, Airflow, Snowflake, AWS/GCP/Azure). Proficiency in Agile methodologies and product management tools (e.g., Jira, Confluence). Excellent communication, leadership, and stakeholder management skills.
Posted 1 month ago
5.0 - 10.0 years
20 - 25 Lacs
Bengaluru
Work from Office
Senior DevOps Engineer (Salesforce Platform)
Would you like to be part of a team that excels in delivering high-quality Salesforce solutions to our customers? Are you a problem solver who enjoys working collaboratively to achieve business goals through Salesforce development?
About the Role
This role is pivotal to improving the reliability, scalability, and efficiency of our Salesforce DevOps ecosystem. You will lead the optimization of CI/CD pipelines using tools such as Copado CI/CD, Copado CRT, and Azure DevOps, ensuring best practices are applied and deployment issues are resolved proactively. The ideal candidate thrives in a collaborative yet independent environment and is passionate about improving systems, processes, and delivery.
About the Team
Join a dynamic and collaborative team dedicated to leveraging Salesforce to its fullest potential. Our team comprises skilled professionals, including developers, administrators, business stakeholders, and product teams, all working together to deliver top-notch Salesforce solutions. We foster a culture of continuous learning and growth, encouraging innovation and the sharing of best practices. As part of our team, you'll have the opportunity to work on exciting projects, lead a talented group of individuals, and make a significant impact on our organization's success.
Responsibilities
Evaluate current DevOps practices and take ownership of improving deployment processes across the Salesforce landscape. Lead the configuration, optimization, and maintenance of automated CI/CD pipelines using Copado CI/CD, Copado CRT, and Azure DevOps. Troubleshoot and resolve Copado deployment errors that impact development and delivery timelines. Design and implement release automation, rollback procedures, version control integrations, and branching strategies. Collaborate closely with development, QA, product, and operations teams to ensure seamless deployments across environments (Development, QA, SIT, UAT, Production). Identify opportunities for automation and implement scripts or tools to reduce manual effort in deployment workflows. Support integration of automated testing into deployment pipelines to promote high-quality releases. Monitor and ensure DevOps practices align with organizational metrics and performance standards, assisting the team in maintaining high performance and stability. Maintain environment consistency and sandbox readiness, including sandbox refreshes, compliance requirements, automation, and seed data loading. Manage complex Salesforce metadata structures and dependencies, including CPQ metadata deployment strategies. Proactively engage with security and compliance teams to ensure DevOps practices align with data governance and audit requirements. Champion DevOps principles and promote collaboration between development and operations to foster a culture of continuous improvement. Evaluate and recommend tooling enhancements that improve DevOps efficiency and scalability. Stay current with industry best practices, emerging technologies, and Salesforce DevOps trends.
Requirements
Bachelor's degree in Computer Science, Software Engineering, or a related field. 5+ years of experience in DevOps or release engineering, with 3+ years supporting Salesforce Sales Cloud. Strong Salesforce development experience, including Apex, Lightning Web Components (LWC), and declarative configuration (Workflows, Process Builder, Flows). In-depth knowledge of Salesforce metadata structure and dependencies, especially in CPQ environments. Experience with deployment automation, rollback strategies, and metadata management. Proven expertise in Copado CI/CD and Copado CRT; experience managing pipelines and troubleshooting deployment blockers. Proficiency with Azure DevOps and Git-based version control systems. Experience with sandbox management, including refresh automation and environment compliance. Demonstrated ability to work independently and lead process improvements from concept through implementation. Strong problem-solving and communication skills, with the ability to clearly articulate technical solutions to cross-functional teams. Copado certifications: Fundamentals I & II, Copado Developer, Copado Consultant. Salesforce certifications: Platform Developer I, Administrator, App Builder. Experience with Own Backup or similar backup and recovery solutions for Salesforce. Experience working in Agile environments, particularly with cross-functional delivery teams. Familiarity with additional DevOps and automation tools (e.g., Jenkins, Gearset, Selenium). Knowledge of monitoring tools and strategies to improve observability of deployments and environments.
Work in a way that works for you
We promote a healthy work/life balance across the organization and offer an appealing working prospect for our people. With numerous wellbeing initiatives, shared parental leave, study assistance, and sabbaticals, we will help you meet your immediate responsibilities and your long-term goals. Working flexible hours - flexing the times when you work in the day to help you fit everything in and work when you are the most productive.
Working for you
We know that your wellbeing and happiness are key to a long and successful career. These are some of the benefits we are delighted to offer: Comprehensive Health Insurance: covers you, your immediate family, and parents. Enhanced Health Insurance Options: competitive rates negotiated by the company. Group Life Insurance: ensuring financial security for your loved ones. Group Accident Insurance: extra protection for accidental death and permanent disablement. Flexible Working Arrangement: achieve a harmonious work-life balance. Employee Assistance Program: access support for personal and work-related challenges. Medical Screening: your well-being is a top priority. Modern Family Benefits: maternity, paternity, and adoption support. Long-Service Awards: recognizing dedication and commitment. New Baby Gift: celebrating the joy of parenthood. Various Paid Time Off: casual leave, sick leave, privilege leave, compassionate leave, special sick leave, and gazetted public holidays.
About the Business
LexisNexis Legal & Professional provides legal, regulatory, and business information and analytics that help customers increase their productivity, improve decision-making, achieve better outcomes, and advance the rule of law around the world. As a digital pioneer, the company was the first to bring legal and business information online with its Lexis and Nexis services. We are committed to providing a fair and accessible hiring process. If you have a disability or other need that requires accommodation or adjustment, please let us know by completing our Applicant Request Support Form or contacting 1-855-833-5120. Criminals may pose as recruiters asking for money or personal information. We never request money or banking details from job applicants.
We are an equal opportunity employer: qualified applicants are considered for and treated during employment without regard to race, color, creed, religion, sex, national origin, citizenship status, disability status, protected veteran status, age, marital status, sexual orientation, gender identity, genetic information, or any other characteristic protected by law.
Posted 1 month ago
5.0 - 10.0 years
9 - 14 Lacs
Hyderabad
Work from Office
Job Description
As a Technical Writer on our Enterprise Applications team, you'll create, maintain, and enhance documentation for our suite of IT services. This role requires a deep understanding of enterprise application concepts and the ability to translate complex technical information into clear, concise, and user-friendly documentation for both business and IT professionals. You will create training content for a wide range of audiences with varying degrees of knowledge and experience. We expect you to have a proven track record of producing effective help content that drives business adoption by empowering users to effectively self-serve their learning objectives. We're looking for an individual who shares our passion for technical documentation and customer education. Help us train the next generation of tech professionals with clear and crisp writing on technical topics, elaborated in a simple and easy-to-understand way.
What you will do
Create technical setup guides, operational runbooks, handbook pages, FAQs, and technical white papers for client products and services. Incorporate structured content principles for a single source of truth. Create end-user documentation for new product launches and features across the Enterprise Applications teams. Identify and close knowledge gaps alongside our application and product teams. Contribute your product and content expertise to the development of eLearning materials, including training videos, brochures, infographics, and other content. Effectively communicate and collaborate with internal stakeholders and subject matter experts to deliver engaging customer-facing content. Review success metrics across content delivery methods.
What you will need
5+ years of enterprise application, security, and IT tech writing experience. 5+ years of experience in a writing role focused on end-user documentation for a technology company. Bachelor's degree in English, Technical Communication, Computer Science, Information Technology, or a related field. Proven experience as a technical writer in the IT industry, with a strong portfolio of documentation samples. Technical proficiency: a strong understanding of IT concepts, systems, and technologies; proficiency in writing structured content that uses variables and metadata to serve context-sensitive material to unique audiences; experience supporting product release cycles in a fast-paced, ambiguous environment. Excellent writing skills: content is clear, succinct, logical, and easy to understand. Strong organizational skills: ability to manage expectations and maintain focus. Curiosity: a bias to constantly question, dig deeper, and learn.
Nice to have
Experience writing wiki-style articles on a wide range of tech topics.
Posted 1 month ago
5.0 - 9.0 years
9 - 13 Lacs
Hyderabad
Work from Office
Career Category: Information Systems
Job Description
ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world's toughest diseases and make people's lives easier, fuller, and longer. We discover, develop, manufacture, and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today.
ABOUT THE ROLE
Role Description: The External Data Analyst will be responsible for optimizing spend on, and reuse of, external data. This role is responsible for maintaining a data catalog with harmonized metadata across functions to increase visibility, promote reuse, and lower annual spend. The External Data Analyst will assess investments in external data and will provide recommendations to the Enterprise Data Council to inform investment approval. This role will work with Global Strategic Sourcing and the Cyber Security team to standardize contracting of data purchases. The External Data Analyst will also work closely with the data engineering team and external data providers to manage the lifecycle of data assets. This role will be responsible for co-defining and operationalizing the business process to capture metadata related to the forecast of data purchases. The person in this role will coordinate activities at the tactical level, interpreting Enterprise Data Council direction and defining operational-level deliverables and actions to maximize data investments.
Roles & Responsibilities: Catalog all external data assets, including harmonizing metadata to increase reuse and inform future data acquisitions. Co-develop and maintain the process to consistently capture the external data purchase forecast, focusing on generating the metadata required to support KPIs and reporting. Work with the Global Strategic Sourcing and Cyber Security teams to standardize data contracts and enable the reuse of data assets across functions. In partnership with functional data SMEs, develop internal expertise on the content of external data to increase reuse across teams, including (but not limited to) participating in data seminars that bring together data SMEs from all functions to increase data literacy. In partnership with the Data Engineering team, design data standardization rules to make external data FAIR from the start, and maintain data quality. In partnership with the Data Privacy and Policy team, develop and operationalize data access controls that adhere to the terms of data contracts, ensuring access control, compliance, and security requirements are enforced. Maintain policies and ensure compliance with data privacy, security, and contractual policies. Publish metrics to measure the effectiveness of data reuse and data literacy and the reduction in data spend.
Functional Skills - Must-Have Skills: Experience managing external data assets used in the life-science industry (e.g., claims, EHR, etc.). Experience working with data providers, supporting negotiations and vendor management activities. Technical data management skills with in-depth knowledge of pharma data standards and regulations. Awareness of industry trends and priorities and the ability to apply them to governance and policies. Experience with the data product development life cycle, including enabling data dictionaries and a business glossary to increase data product reusability and data literacy.
Good-to-Have Skills: Ability to successfully execute complex projects in a fast-paced environment and to manage multiple priorities effectively. Ability to manage project or departmental budgets. Experience with modelling tools (e.g., Visio). Basic programming skills and experience with data visualization and data modeling tools. Experience working with agile development methodologies such as Scaled Agile.
Soft Skills: Ability to build business relationships and understand end-to-end data use and needs. Excellent interpersonal skills (team player). People management skills, either in a matrix or direct-line function. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Good presentation and public speaking skills. Strong attention to detail, quality, time management, and customer focus.
Basic Qualifications: Any degree with 5-9 years of experience in business, engineering, IT, or a related field.
EQUAL OPPORTUNITY STATEMENT
We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
GCF Level: 04A
Posted 1 month ago
7.0 - 12.0 years
15 - 30 Lacs
Gurugram, Delhi / NCR
Work from Office
Job Description
We are seeking a highly skilled Senior Data Engineer with deep expertise in AWS data services, data wrangling using Python & PySpark, and a solid understanding of data governance, lineage, and quality frameworks. The ideal candidate will have a proven track record of delivering end-to-end data pipelines for logistics, supply chain, enterprise finance, or B2B analytics use cases.
Role & responsibilities
Design, build, and optimize ETL pipelines using AWS Glue 3.0+ and PySpark. Implement scalable and secure data lakes using Amazon S3, following bronze/silver/gold zoning. Write performant SQL using AWS Athena (Presto) with CTEs, window functions, and aggregations. Take full ownership from ingestion through transformation, validation, and metadata documentation to dashboard-ready output. Build pipelines that are not just performant but audit-ready and metadata-rich from the first version. Integrate classification tags and ownership metadata into all columns using AWS Glue Catalog tagging conventions. Ensure no pipeline moves to the QA or BI team without validation logs and field-level metadata completed. Develop job orchestration workflows using AWS Step Functions integrated with EventBridge or CloudWatch. Manage schemas and metadata using the AWS Glue Data Catalog. Enforce data quality using Great Expectations, with checks for null percentages, ranges, and referential rules. Ensure data lineage with OpenMetadata or Amundsen and add metadata classifications (e.g., PII, KPIs). Collaborate with data scientists on ML pipelines, handling JSON/Parquet I/O and feature engineering (a minimal window-function sketch follows this posting). Prepare flattened, filterable datasets for BI tools like Sigma, Power BI, or Tableau. Interpret business metrics such as forecasted revenue, margin trends, occupancy/utilization, and volatility. Work with consultants, QA, and business teams to finalize KPIs and logic.
Preferred candidate profile
Strong hands-on experience with AWS: Glue, S3, Athena, Step Functions, EventBridge, CloudWatch, and the Glue Data Catalog. Programming skills in Python 3.x, PySpark, and SQL (Athena/Presto). Proficiency with Pandas and NumPy for data wrangling, feature extraction, and time-series slicing. Strong command of data governance tools like Great Expectations and OpenMetadata/Amundsen. Familiarity with tagging sensitive metadata (PII, KPIs, model inputs). Capable of creating audit logs for QA and rejected data. Experience in feature engineering: rolling averages, deltas, and time-window tagging. BI readiness with Sigma, with exposure to Power BI / Tableau (nice to have).
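For illustration only (not part of the posting): a minimal PySpark feature-engineering sketch of the rolling-average and delta work described above. The DataFrame, column names, and single-partition window are hypothetical toy choices.

```python
# Illustrative feature-engineering sketch: 7-row rolling average and row-over-row
# delta via PySpark window functions. Data and column names are hypothetical;
# a real pipeline would partition the window by an entity key.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("features").getOrCreate()

df = spark.createDataFrame(
    [("2024-01-01", 100.0), ("2024-01-02", 120.0), ("2024-01-03", 90.0)],
    ["order_date", "revenue"],
)

w = Window.orderBy("order_date")
w7 = w.rowsBetween(-6, 0)  # current row plus the 6 preceding rows

features = (
    df.withColumn("revenue_7d_avg", F.avg("revenue").over(w7))
      .withColumn("revenue_delta", F.col("revenue") - F.lag("revenue", 1).over(w))
)

features.show()
```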
Posted 1 month ago
7.0 - 12.0 years
15 - 30 Lacs
Gurugram
Hybrid
Job Description
We are seeking a highly skilled Senior Data Engineer with deep expertise in AWS data services, data wrangling using Python & PySpark, and a solid understanding of data governance, lineage, and quality frameworks. The ideal candidate will have a proven track record of delivering end-to-end data pipelines for logistics, supply chain, enterprise finance, or B2B analytics use cases.
Role & responsibilities
Design, build, and optimize ETL pipelines using AWS Glue 3.0+ and PySpark. Implement scalable and secure data lakes using Amazon S3, following bronze/silver/gold zoning. Write performant SQL using AWS Athena (Presto) with CTEs, window functions, and aggregations. Take full ownership from ingestion through transformation, validation, and metadata documentation to dashboard-ready output. Build pipelines that are not just performant but audit-ready and metadata-rich from the first version. Integrate classification tags and ownership metadata into all columns using AWS Glue Catalog tagging conventions. Ensure no pipeline moves to the QA or BI team without validation logs and field-level metadata completed. Develop job orchestration workflows using AWS Step Functions integrated with EventBridge or CloudWatch. Manage schemas and metadata using the AWS Glue Data Catalog. Enforce data quality using Great Expectations, with checks for null percentages, ranges, and referential rules (a minimal Great Expectations sketch follows this posting). Ensure data lineage with OpenMetadata or Amundsen and add metadata classifications (e.g., PII, KPIs). Collaborate with data scientists on ML pipelines, handling JSON/Parquet I/O and feature engineering. Prepare flattened, filterable datasets for BI tools like Sigma, Power BI, or Tableau. Interpret business metrics such as forecasted revenue, margin trends, occupancy/utilization, and volatility. Work with consultants, QA, and business teams to finalize KPIs and logic.
Preferred candidate profile
Strong hands-on experience with AWS: Glue, S3, Athena, Step Functions, EventBridge, CloudWatch, and the Glue Data Catalog. Programming skills in Python 3.x, PySpark, and SQL (Athena/Presto). Proficiency with Pandas and NumPy for data wrangling, feature extraction, and time-series slicing. Strong command of data governance tools like Great Expectations and OpenMetadata/Amundsen. Familiarity with tagging sensitive metadata (PII, KPIs, model inputs). Capable of creating audit logs for QA and rejected data. Experience in feature engineering: rolling averages, deltas, and time-window tagging. BI readiness with Sigma, with exposure to Power BI / Tableau (nice to have).
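For illustration only (not part of the posting): a minimal data-quality sketch using the classic Pandas-backed Great Expectations API (pre-1.0 versions); the dataset, columns, and thresholds are hypothetical, and newer GE releases use a different, suite-based API.

```python
# Illustrative data-quality checks with the legacy Great Expectations API;
# the file, columns, and allowed values are hypothetical placeholders.
import great_expectations as ge

df = ge.read_csv("orders.csv")  # wraps a Pandas DataFrame with expectation methods

df.expect_column_values_to_not_be_null("order_id")
df.expect_column_values_to_be_between("margin", min_value=0.0, max_value=1.0)
df.expect_column_values_to_be_in_set("status", ["OPEN", "SHIPPED", "CLOSED"])

results = df.validate()  # summary of which expectations passed or failed
print(results.success)
```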
Posted 1 month ago
1.0 - 4.0 years
3 - 6 Lacs
Hyderabad
Work from Office
We are on a mission to power the data productivity of our customers and the world, by helping teams get data business-ready, faster. Our technology allows customers to load, transform, sync, and orchestrate their data. We are looking for passionate, high-integrity individuals to help us scale up our growing business. Together, we can make a dent in the universe bigger than ourselves.
Role Purpose
Matillion is a fast-paced, hyper-scale software development company. You will be based in India but working with colleagues globally, specifically across the US, the UK, and Hyderabad. The Enterprise Data team is responsible for producing Matillion's reporting metrics and KPIs. We work closely with Finance colleagues, the product team, and Go-To-Market to interpret the data that we have and provide actionable insight across the business. The purpose of this role is to: increase the value of strategic information from the data warehouse, Salesforce, ThoughtSpot, and DPC Hub by providing context for the data, helping analysts make more effective decisions; reduce training costs by documenting data context, history, and origin; reduce the time-to-value of data analytics by assisting analysts in finding the information they need; improve communication between data users in data engineering, Go-To-Market, Product, and Finance; and identify and reduce redundant data and processes.
What will you be doing?
Create and maintain metadata for our core data assets. Manage the metadata repository, Confluence. Define and implement a review schedule for metadata. Agree metadata with relevant stakeholders. Define data quality criteria for each data asset. Build a data quality report for each data asset, plus an overall data quality report. Work with our AI tools, specifically Gemini and MAIA, to automate and reduce the effort needed to maintain our data artefacts. Identify data to be archived or deleted based on the defined retention policy.
What are we looking for?
Experience of NetSuite and Salesforce. Knowledge of the Kimball modelling methodology. Knowledge of Snowflake and ThoughtSpot. Ability to work with business and technical stakeholders, and to negotiate with stakeholders who have different objectives. Excellent stakeholder management skills. Ability to document work clearly and concisely. Experience of working with remote teams.
Posted 1 month ago
5.0 - 7.0 years
7 - 9 Lacs
Pune
Work from Office
Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions. We are currently seeking an experienced professional to join our team in the role of Senior Consultant Specialist.
In this role, you will:
Work as a FileNet developer (knowledge of any other content management repository is good to have). Bring good interpersonal skills and the ability to work independently without supervision. Develop and deploy microservices into the cloud (GKE and IKP). Help teammates with their technical queries and enable them to deliver tasks. Manage production releases and deployments. Adopt and follow Agile and DevOps principles. Communicate effectively with various business and IT teams. Collaborate effectively with global members to achieve desired outcomes. Be open to learning new technologies based on project requirements.
Requirements
To be successful in this role, you should meet the following requirements: FileNet metadata configuration and deployment experience using FDM. Experience writing custom code modules with the Java FileNet API and CMIS web services. ICN customization and plugin development (EDS, action plugins, etc.). Experience with IBM Content Collector (ICC). Experience with a source code repository (e.g., GitHub). Strong Java/J2EE skills, especially building and consuming RESTful services. Experience with FileNet containers deployed in the cloud is good to have. Hands-on experience with, or an understanding of, DevOps tooling: Terraform, Groovy scripts, and Jenkins CI/CD. Familiarity with the Internal Kubernetes Platform (IKP). Familiarity with PL/SQL, PostgreSQL, and Unix shell scripting. Good communication skills; a team player. Ability to prioritise and work independently within a diverse team environment. Knowledge of GenAI, vector databases, and Python is good to have. Awareness of the change process: raising CRs, CAB reviews, etc.
You'll achieve more when you join HSBC.
Posted 1 month ago
5.0 - 12.0 years
50 - 60 Lacs
Gurugram
Work from Office
Manager - ASO
About Junglee Games
With over 140 million users, Junglee Games is a leader in the online skill gaming space. Founded in San Francisco in 2012 and part of the Flutter Entertainment Group, we are revolutionising how people play games. Our notable games include Howzat, Junglee Rummy, and Junglee Poker. Our team comprises over 1000 talented individuals who have worked on internationally acclaimed AAA titles like Transformers and Star Wars: The Old Republic and contributed to Hollywood hits such as Avatar.
Junglee's mission is to build entertainment for millions of people around the world and connect them through games. Junglee Games is not just a gaming company but a blend of innovation, data science, cutting-edge tech, and, most importantly, a values-driven culture that is creating the next set of conscious leaders.
Job overview
We are seeking a talented and experienced Senior App Store Optimisation (ASO) Specialist to join our growing team. As a Manager ASO, you will play a key role in driving the visibility, discoverability, and success of our mobile games in the app stores.
Requirements:
5-12 years of experience in App Store Optimization (ASO) for mobile apps, preferably in the gaming industry
Bachelor's degree in Marketing, Business, Computer Science, or a related field
Proven track record of success in driving organic app downloads, revenue, and user engagement through ASO strategies
Deep understanding of app store ecosystems, algorithms, and best practices for iOS and Android platforms
Expertise in ASO tools and analytics platforms, such as App Annie, Sensor Tower, and Google Play Console
Strong analytical skills with the ability to interpret data, identify trends, and draw actionable insights
Excellent communication, collaboration, and project management skills
Creative thinking and problem-solving abilities with a passion for mobile gaming and technology
Responsibilities:
1. Develop and execute comprehensive ASO strategies to optimise app store listings and drive organic downloads and revenue.
2. Conduct in-depth keyword research and analysis to identify high-potential keywords and phrases for app optimisation.
3. Optimize app metadata, including app titles, descriptions, keywords, and screenshots, to improve search rankings and conversion rates.
4. Monitor app store rankings, user reviews, and competitor activity to identify trends, insights, and opportunities for optimization.
5. Collaborate with cross-functional teams, including product development, marketing, and data analytics, to align ASO strategies with business goals and priorities.
6. Stay up to date on the latest trends, best practices, and algorithm changes in ASO and mobile app marketing.
7. Analyze ASO performance metrics and provide regular reports, insights, and recommendations for optimization.
8. Lead experimentation and A/B testing initiatives to optimize app store listings and drive continuous improvement.
Be a part of Junglee Games to:
Value Customers & Data - Prioritise customers, use data-driven decisions, master KPIs, and leverage ideation and A/B testing to drive impactful outcomes.
Inspire Extreme Ownership - We embrace ownership, collaborate effectively, and take pride in every detail to ensure every game becomes a smashing success.
Lead with Love - We reject micromanagement and fear, fostering open dialogue, mutual growth, and a fearless yet responsible work ethic.
Embrace Change - Change drives progress; our strength lies in adapting swiftly and recognising when to evolve to stay ahead.
Play the Big Game - We think big, challenge norms, and innovate boldly, driving impactful results through fresh ideas and inventive problem-solving.
Know more about us
Explore the world of Junglee Games through our website, www.jungleegames.com. Get a glimpse of what Life at Junglee Games looks like on LinkedIn. Here is a quick snippet of the Junglee Games Offsite 24.
Liked what you saw so far? Be A Junglee.
Posted 1 month ago
5.0 - 10.0 years
20 - 25 Lacs
Bengaluru
Work from Office
Voyager (94001), India, Bangalore, Karnataka
Principal Associate - Senior Software Engineer
Ever since our first credit card customer in 1994, Capital One has recognized that technology and data can enable even large companies to be innovative and personalized. As one of the first large enterprises to go all-in on the public cloud, Capital One needed to build cloud and data management tools that didn't exist in the marketplace to enable us to operate at scale in the cloud. And in 2022, we publicly announced Capital One Software and brought our first B2B software solution, Slingshot, to market.
Building on Capital One's pioneering adoption of modern cloud and data capabilities, Capital One Software is helping accelerate the data management journey at scale for businesses operating in the cloud. If you think of the kinds of challenges that companies face - things like data publishing, data consumption, data governance, and infrastructure management - we've built tools to address these various needs along the way. Capital One Software will continue to explore where we can bring our solutions to market to help other businesses address these same needs going forward.
We are seeking top-tier talent to join our pioneering team and propel us towards our destination. You will be joining a team of innovative product, tech, and design leaders that tirelessly seek to question the status quo. As a Senior Software Engineer, you'll have the opportunity to be at the forefront of building this business and bringing these tools to market.
As a Senior Software Engineer - Data Management, you will:
Help build innovative products and solutions for problems in the Data Management domain
Maintain knowledge of industry innovations, trends and practices to curate a continual stream of incubated projects and create rapid product prototypes
Basic Qualifications
Bachelor's Degree in Computer Science or a related field
At least 5 years of professional software development experience (internship experience does not apply)
At least 2 years of experience in building software solutions to problems in one of the Data Management areas listed below:
Data Catalog / Metadata Store
Access Control / Policy Enforcement
Data Governance
Data Lineage (an illustrative sketch follows this listing)
Data Monitoring and Alerting
Data Scanning and Protection
At least 2 years of experience in building software using at least 1 of the following: Golang, Java, Python, Rust, C++
At least 2 years of experience with a public cloud (AWS, Microsoft Azure, Google Cloud)
Preferred Qualifications
Master's Degree in Computer Science or a related field
At least 7 years of professional software development experience (internship experience does not apply)
Software development experience in a commercial Data Management product that is being built from the ground up
Experience in supporting a commercial Data Management product in the cloud with Enterprise clients
At this time, Capital One will not sponsor a new applicant for employment authorization for this position. No agencies please.
Capital One is an equal opportunity employer (EOE, including disability/vet) committed to non-discrimination in compliance with applicable federal, state, and local laws. Capital One promotes a drug-free workplace.
Capital One will consider for employment qualified applicants with a criminal history in a manner consistent with the requirements of applicable laws regarding criminal background inquiries, including, to the extent applicable, Article 23-A of the New York Correction Law; San Francisco, California Police Code Article 49, Sections 4901-4920; New York City's Fair Chance Act; Philadelphia's Fair Criminal Records Screening Act; and other applicable federal, state, and local laws and regulations regarding criminal background inquiries.
All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodations.
Capital One Financial is made up of several different entities. Please note that any position posted in Canada is for Capital One Canada, any position posted in the United Kingdom is for Capital One Europe and any position posted in the Philippines is for Capital One Philippines Service Corp. (COPSSC).
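One of the Data Management areas listed above is data lineage. As an illustrative sketch only (not Capital One's actual design), a minimal lineage graph with transitive upstream traversal could be modelled like this:

```python
# Minimal illustrative data-lineage graph: nodes are dataset names,
# edges point from an upstream dataset to the dataset derived from it.
from collections import defaultdict, deque

class LineageGraph:
    def __init__(self):
        self._upstream = defaultdict(set)  # dataset -> direct upstream datasets

    def add_edge(self, upstream: str, downstream: str) -> None:
        self._upstream[downstream].add(upstream)

    def all_upstream(self, dataset: str) -> set:
        """Every transitive ancestor of `dataset`, found by BFS."""
        seen, queue = set(), deque([dataset])
        while queue:
            for parent in self._upstream[queue.popleft()]:
                if parent not in seen:
                    seen.add(parent)
                    queue.append(parent)
        return seen

g = LineageGraph()
g.add_edge("raw.orders", "staging.orders")
g.add_edge("staging.orders", "mart.revenue")
print(g.all_upstream("mart.revenue"))  # {'raw.orders', 'staging.orders'}
```

A production system would layer persistence, access control, and column-level edges on top of a structure like this; the sketch only shows the core graph traversal.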
Posted 1 month ago
4.0 - 6.0 years
8 - 11 Lacs
Noida, Hyderabad, Pune
Work from Office
Job Summary:
We are seeking a skilled Data Governance Specialist to develop and implement data governance frameworks, policies, and standards. You will ensure data quality, compliance, and security across the organization while collaborating with data owners, stewards, and technical teams. This role involves managing data catalogs, metadata, and classification, and supporting initiatives to enhance data integrity and regulatory compliance.
Key Responsibilities:
* Exposure to both AWS and Azure cloud platforms, along with Databricks.
* Develop, implement, and maintain data governance frameworks, policies, and standards.
* Define and enforce data quality, data integrity, and data security best practices across the organization.
* Collaborate with data owners, stewards, and technical teams to ensure data governance objectives are met.
* Manage and maintain enterprise data catalogs and metadata repositories.
* Monitor data compliance with internal policies and external regulations (e.g., GDPR, CCPA).
* Identify and resolve data governance issues, ensuring accurate and reliable data assets.
* Conduct data classification and support data lineage documentation (a toy classification sketch follows this listing).
* Provide training and guidance to teams on data governance processes and best practices.
* Report on data governance metrics and recommend improvements to strengthen governance maturity.
Role Requirements and Qualifications:
* Proven experience implementing and managing data governance frameworks, policies, and standards.
* Strong understanding of data quality, metadata management, and data stewardship practices.
* Experience working with data catalogs, data lineage, and data classification tools.
* Solid knowledge of data privacy and compliance regulations (e.g., GDPR, CCPA) and how they impact data management.
* Familiarity with cloud data platforms (e.g., AWS, Azure, GCP) and modern data environments.
* Ability to collaborate with cross-functional teams including data owners, stewards, engineers, and compliance stakeholders.
* Strong analytical and problem-solving skills to identify and resolve data governance issues.
* Excellent communication skills to train and guide teams on data governance best practices.
* Proficiency with data governance or metadata tools (e.g., Collibra, Alation, Informatica) is a plus.
* Bachelor's or Master's degree in Information Management, Data Science, Computer Science, or a related field (or equivalent practical experience).
Why Join Us:
* Opportunities to work on transformative projects, cutting-edge technology and innovative solutions with leading global firms across industry sectors.
* Continuous investment in employee growth and professional development with a strong focus on up- and re-skilling.
* Competitive compensation & benefits, ESOPs and international assignments.
* Supportive environment with a healthy work-life balance and a focus on employee well-being.
* Open culture that values diverse perspectives, encourages transparent communication and rewards contributions.
How to Apply:
If you are interested in joining our team and meet the qualifications listed above, please apply and submit your resume highlighting why you are the ideal candidate for this position.
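As a toy illustration of the data classification work this role mentions, a rule-based column tagger might look like the sketch below; real governance platforms such as Collibra, Alation, or Informatica use far richer detection, and the patterns and labels here are deliberately simplistic assumptions:

```python
# Toy rule-based column classifier for PII tagging. Patterns and labels
# are illustrative only; production classifiers combine regexes with
# dictionaries, checksums, and ML-based detection.
import re

PATTERNS = {
    "email": re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$"),
    "phone_in": re.compile(r"^(\+91[ -]?)?\d{10}$"),  # simplistic Indian mobile format
}

def classify_column(values, threshold=0.8):
    """Tag a column with a PII label if most sampled values match a pattern."""
    non_empty = [v for v in values if v]
    if not non_empty:
        return None
    for label, pattern in PATTERNS.items():
        hits = sum(1 for v in non_empty if pattern.match(v))
        if hits / len(non_empty) >= threshold:
            return label
    return None

print(classify_column(["a@b.com", "c@d.org", ""]))  # -> "email"
```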
Posted 1 month ago
12.0 - 17.0 years
16 - 20 Lacs
Hyderabad
Work from Office
Principal / Senior Systems Performance Engineer
Micron Data Center and Client Workload Engineering in Hyderabad, India, is seeking a senior/principal engineer to join our dynamic team. The successful candidate will primarily contribute to the ML development, ML DevOps, and HBM programs in the data center by analyzing how AI/ML workloads perform on the latest MU-HBM, Micron main memory, expansion memory, and near memory (HBM/LP) solutions; conducting competitive analysis; showcasing the benefits that workloads see from MU-HBM's capacity, bandwidth, and thermals; contributing to marketing collateral; and extracting AI/ML workload traces to help optimize future HBM designs.
Job Responsibilities:
The job responsibilities include but are not limited to the following:
Design, implement, and maintain scalable and reliable ML infrastructure and pipelines.
Collaborate with data scientists and ML engineers to deploy machine learning models into production environments.
Automate and optimize ML workflows, including data preprocessing, model training, evaluation, and deployment.
Monitor and manage the performance, reliability, and scalability of ML systems.
Troubleshoot and resolve issues related to ML infrastructure and deployments.
Implement and manage distributed training and inference solutions to enhance model performance and scalability.
Utilize DeepSpeed, TensorRT, and vLLM for optimizing and accelerating AI inference and training processes.
Understand key considerations for ML models, such as transformer architectures, precision, quantization, distillation, attention span and KV cache, MoE, etc.
Build workload memory access traces from AI models.
Study system balance ratios for DRAM to HBM in terms of capacity and bandwidth to understand and model TCO.
Study data movement between the CPU, GPU, and the associated memory subsystems (DDR, HBM) in heterogeneous system architectures via connectivity such as PCIe/NVLink/Infinity Fabric to understand the bottlenecks in data movement for different workloads.
Develop an automated testing framework through scripting.
Engage with customers and present at conferences to showcase findings, and develop whitepapers.
Requirements:
Strong programming skills in Python and familiarity with ML frameworks such as TensorFlow, PyTorch, or scikit-learn.
Experience in data preparation: cleaning, splitting, and transforming data for training, validation, and testing.
Proficiency in model training and development: creating and training machine learning models.
Expertise in model evaluation: testing models to assess their performance.
Skills in model deployment: launching servers, live inference, batched inference (a minimal sketch follows this listing).
Experience with AI inference and distributed training techniques.
Strong foundation in GPU and CPU processor architecture.
Familiarity with and knowledge of server system memory (DRAM).
Strong experience with benchmarking and performance analysis.
Strong software development skills using leading scripting and programming languages and technologies (Python, CUDA, C, C++).
Familiarity with PCIe and NVLink connectivity.
Preferred Qualifications:
Experience in quickly building AI workflows: building pipelines and model workflows to design, deploy, and manage consistent model delivery.
Ability to easily deploy models anywhere: using managed endpoints to deploy models and workflows across accessible CPU and GPU machines.
Understanding of MLOps: the overarching concept covering the core tools, processes, and best practices for end-to-end machine learning system development and operations in production.
Knowledge of GenAIOps: extending MLOps to develop and operationalize generative AI solutions, including the management of and interaction with a foundation model.
Familiarity with LLMOps: focused specifically on developing and productionizing LLM-based solutions.
Experience with RAGOps: focusing on the delivery and operation of RAGs, considered the ultimate reference architecture for generative AI and LLMs.
Data management: collect, ingest, store, process, and label data for training and evaluation. Configure role-based access control; dataset search, browsing, and exploration; data provenance tracking; data logging; dataset versioning; metadata indexing; data quality validation; dataset cards; and dashboards for data visualization.
Workflow and pipeline management: work with cloud resources or a local workstation; connect data preparation, model training, model evaluation, model optimization, and model deployment steps into an end-to-end automated and scalable workflow combining data and compute.
Model management: train, evaluate, and optimize models for production; store and version models along with their model cards in a centralized model registry; assess model risks and ensure compliance with standards.
Experiment management and observability: track and compare different machine learning model experiments, including changes in training data, models, and hyperparameters. Automatically search the space of possible model architectures and hyperparameters for a given model architecture; analyze model performance during inference; monitor model inputs and outputs for concept drift.
Synthetic data management: extend data management with a new native generative AI capability. Generate synthetic training data through domain randomization to increase transfer-learning capabilities. Declaratively define and generate edge cases to evaluate, validate, and certify model accuracy and robustness.
Embedding management: represent data samples of any modality as dense multi-dimensional embedding vectors; generate, store, and version embeddings in a vector database. Visualize embeddings for improvised exploration. Find relevant contextual information through vector similarity search for RAGs.
Education: Bachelor's or higher (with 12+ years of experience) in Computer Science or a related field.
AI alert: Candidates are encouraged to use AI tools to enhance their resume and/or application materials. However, all information provided must be accurate and reflect the candidates' true skills and experiences. Misuse of AI to fabricate or misrepresent qualifications will result in immediate disqualification.
Fraud alert: Micron advises job seekers to be cautious of unsolicited job offers and to verify the authenticity of any communication claiming to be from Micron by checking the official Micron careers website.
About Micron Technology, Inc.
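The requirements above call for model deployment skills, including batched inference. A minimal PyTorch sketch of a batched inference loop follows; the model and data are placeholder assumptions, not Micron's actual workloads:

```python
# Minimal batched-inference loop in PyTorch: batching amortizes per-call
# overhead and keeps the GPU fed, which matters for the memory-bandwidth
# studies this role describes. Model and data are placeholders.
import torch
from torch.utils.data import DataLoader, TensorDataset

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Sequential(torch.nn.Linear(128, 64), torch.nn.ReLU(),
                            torch.nn.Linear(64, 10)).to(device).eval()

data = TensorDataset(torch.randn(1024, 128))   # dummy inputs
loader = DataLoader(data, batch_size=256)      # batched, not per-sample

with torch.inference_mode():                   # no autograd bookkeeping
    for (batch,) in loader:
        logits = model(batch.to(device, non_blocking=True))
        preds = logits.argmax(dim=1)
print(preds.shape)  # torch.Size([256]) for the final batch
```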
Posted 1 month ago