7.0 - 12.0 years
14 - 19 Lacs
Bengaluru
Work from Office
We are seeking a detail-oriented Data Architect to design and implement robust data architecture and frameworks that support enterprise-level platforms. These platforms serve as central repositories for product asset data, enabling data-driven decision-making and supporting both field and digital services.

Required Skills:
- Expertise in data modeling and design principles
- Proficiency in database management systems (e.g., SQL, NoSQL)
- Strong analytical and problem-solving skills
- Experience with data warehousing and ETL processes
- Experience with BI tools such as Power BI, Tableau, and QuickSight
- Ability to create and manage complex data models
- Excellent communication and collaboration skills
- Experience with data governance and data security practices
- Good knowledge of Master Data Management (MDM)
- Familiarity with Salesforce environments
- Experience with cloud platforms such as AWS
- Basic knowledge of integration platforms (e.g., SnapLogic, Informatica CAI)

Qualifications:
- 7+ years of experience in data architecture, database design, or related roles
- 5+ years of experience designing and implementing enterprise data solutions
- Proven experience with data modeling and architecture design methodologies
- Demonstrated experience with both relational and non-relational database systems
- Experience leading data architecture for major transformation initiatives or system implementations
- Track record of translating business requirements into effective data solutions
- Bachelor's degree in Computer Science, Information Systems, or a related field

Responsibilities:
- Create blueprints for data management systems that ensure scalability, performance, and alignment with business needs (illustrated in the sketch after this posting)
- Define and enforce data standards, principles, and models across the organization
- Ensure data quality, security, and governance are maintained at all levels
- Collaborate with business and technical stakeholders to align data strategies with enterprise goals
- Maintain consistency in data architecture practices and documentation
- Improve system performance through testing, troubleshooting, and integration of new technologies
- Lead data migration and transformation initiatives during major deployments
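As a hedged illustration of the blueprint work described above, here is a minimal star-schema sketch using Python's built-in sqlite3 module; every table, column, and value is invented for the example and is not taken from the posting.

```python
import sqlite3

# Toy star schema for product asset data: one fact table joined to two
# dimensions. All names here are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_product (
    product_key  INTEGER PRIMARY KEY,
    product_name TEXT NOT NULL,
    category     TEXT NOT NULL
);
CREATE TABLE dim_date (
    date_key  INTEGER PRIMARY KEY,   -- e.g. 20240131
    full_date TEXT NOT NULL
);
CREATE TABLE fact_asset_event (
    product_key INTEGER REFERENCES dim_product(product_key),
    date_key    INTEGER REFERENCES dim_date(date_key),
    event_count INTEGER NOT NULL
);
""")
conn.execute("INSERT INTO dim_product VALUES (1, 'Pump X100', 'Field Equipment')")
conn.execute("INSERT INTO dim_date VALUES (20240131, '2024-01-31')")
conn.execute("INSERT INTO fact_asset_event VALUES (1, 20240131, 42)")

# A typical analytical query against the model: events per category.
for row in conn.execute("""
    SELECT p.category, SUM(f.event_count) AS events
    FROM fact_asset_event f
    JOIN dim_product p USING (product_key)
    GROUP BY p.category
"""):
    print(row)
```

The same separation of facts from conformed dimensions scales up to the warehouse platforms the posting names.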
Posted 3 days ago
3.0 - 6.0 years
6 - 10 Lacs
Hyderabad
Work from Office
Job Title: Data Governance Associate
Location: Hyderabad
Experience: 4-6 Years

Key Responsibilities:
1. Data Quality and Master Data Management: Assist in the development and implementation of data quality frameworks and master data management processes. Monitor and report on data quality metrics (see the sketch after this posting), identifying areas for improvement and ensuring compliance with data governance standards.
2. Data Object Dictionary: Support the creation and maintenance of a comprehensive data object dictionary to ensure that data assets are well documented and easily accessible. Collaborate with cross-functional teams to standardize terminology and enhance understanding of data objects across the organization.
3. Data Governance Support: Assist in the execution of data governance initiatives, including data stewardship and data lifecycle management. Participate in data governance meetings and contribute to the development of policies and procedures that promote data integrity and compliance.
4. Collaboration and Communication: Work closely with data owners, data stewards, and other stakeholders to ensure alignment on data governance practices. Communicate effectively with technical and non-technical teams to promote data governance awareness and best practices.

Qualifications:
- Education: Bachelor's degree in Computer Science, Information Management, or a related field.
- Experience: 3 to 4 years of experience in data governance, data quality, and master data management. Familiarity with data object dictionaries and data documentation practices. Experience in monitoring and improving data quality metrics.
- Technical Skills: Basic proficiency in SQL for querying and extracting data from databases. Familiarity with data governance tools and platforms is a plus. Understanding of data integration techniques and tools.
- Soft Skills: Strong analytical and problem-solving skills with attention to detail. Excellent communication skills, with the ability to convey complex data concepts clearly. Ability to work collaboratively in a team environment and manage multiple tasks effectively.

Preferred Qualifications:
- Experience in supporting data governance initiatives and projects.
- Knowledge of data quality principles and practices.
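To make the data quality monitoring concrete, here is a small pandas sketch of the kind of completeness and uniqueness metrics such a role might report; the records and column names are hypothetical.

```python
import pandas as pd

# Hypothetical master data extract with deliberate quality problems.
df = pd.DataFrame({
    "customer_id": [1, 2, 2, 4],
    "email": ["a@example.com", None, "b@example.com", "b@example.com"],
})

metrics = {
    # Share of rows with a populated email (completeness).
    "email_completeness": round(df["email"].notna().mean(), 2),
    # Primary-key uniqueness check.
    "customer_id_unique": bool(df["customer_id"].is_unique),
    # Fully duplicated records.
    "duplicate_rows": int(df.duplicated().sum()),
}
print(metrics)  # {'email_completeness': 0.75, 'customer_id_unique': False, 'duplicate_rows': 0}
```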
Posted 3 days ago
10.0 - 20.0 years
12 - 16 Lacs
Gurugram
Work from Office
The next step of your career starts here, where you can bring your own unique mix of skills and perspectives to a fast-growing team.

Metyis is a global and forward-thinking firm operating across a wide range of industries, developing and delivering AI & Data, Digital Commerce, Marketing & Design solutions and Advisory services. At Metyis, our long-term partnership model brings long-lasting impact and growth to our business partners and clients through extensive execution capabilities. With our team, you can experience a collaborative environment with highly skilled multidisciplinary experts, where everyone has room to build bigger and bolder ideas. Being part of Metyis means you can speak your mind and be creative with your knowledge. Imagine the things you can achieve with a team that encourages you to be the best version of yourself. We are Metyis. Partners for Impact.

What we offer:
- Interact with senior stakeholders at our clients on a regular basis to drive their business towards impactful change.
- Become the go-to person for end-to-end infrastructure management and deployment processes.
- Lead your team in supporting data management, data visualization, and analytics product teams to deliver optimal solutions.
- Become part of a fast-growing, international, and diverse team.

What you will do:
- Assist in the creation and maintenance of data pipelines for data management, visualization, and analytics products.
- Support the design of platform infrastructure, network, and security, and collaborate with senior engineers to ensure security, compliance, and cost efficiency in cloud environments.
- Support the automation of infrastructure provisioning and configuration management using Terraform.
- Help ensure our data platform operates efficiently and remains online.
- Collaborate with senior team members to develop and deploy automated services and APIs.
- Gain experience with Unix/Linux and network fundamentals.
- Support deployment processes with CI/CD tools like Azure DevOps and GitHub Actions.
- Work with Azure and its data platform and analytics components.
- Develop skills in automation and open-source tools.

What you'll bring:
- Understanding of IT operations and process re-engineering.
- 3+ years of experience in CloudOps, DevOps, or IT infrastructure.
- Familiarity with Microsoft Azure, cloud fundamentals, and Azure DevOps (experience with other cloud platforms is a plus).
- Proficiency with scripting languages and tooling (e.g., Python, YAML, Git, Bash, PowerShell); a small scripted-audit sketch follows this posting.
- Proficiency with CI/CD tools (e.g., Azure DevOps Pipelines, GitHub Actions).
- Azure certifications (AZ-104, AZ-400, AZ-305, or DP-203) are recommended and will be considered a plus.
- Experience working on Azure data platform projects.
- Familiarity with software engineering best practices in coding, software design patterns, testing, and debugging.
- Strong problem-solving skills and a willingness to learn.
- Interest in learning about data analytics and cloud operations.
- Ability to work collaboratively in a team environment.

Good to have:
- Experience with testing and data quality (e.g., pytest, Great Expectations).
- Experience working in an Agile environment.
- Experience with data processing frameworks like Spark and the Databricks environment in a DevOps or DataOps capacity.

In a changing world, diversity and inclusion are core values for team well-being and performance.
At Metyis, we want to welcome and retain all talents, regardless of gender, age, origin or sexual orientation, and irrespective of whether or not they are living with a disability, as each of them has their own experience and identity.
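As a hedged illustration of the scripted automation mentioned above, here is a small Azure SDK for Python sketch that audits resource groups for a missing tag; the tag name and the compliance rule are assumptions, and it requires the azure-identity and azure-mgmt-resource packages plus a signed-in credential and a subscription ID.

```python
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# Authenticate with whatever credential is available (az login, managed
# identity, environment variables, ...).
credential = DefaultAzureCredential()
client = ResourceManagementClient(credential, os.environ["AZURE_SUBSCRIPTION_ID"])

# Flag resource groups missing a hypothetical cost-center tag, a common
# governance check in CloudOps teams.
for rg in client.resource_groups.list():
    tags = rg.tags or {}
    if "cost-center" not in tags:
        print(f"untagged resource group: {rg.name} ({rg.location})")
```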
Posted 3 days ago
1.0 - 3.0 years
6 - 10 Lacs
New Delhi, Bengaluru
Work from Office
About Us: Good Business Lab is an independent, non-profit labor innovation company. We use rigorous academic research to prove that worker well-being programs have business impacts. We develop market-ready, scalable interventions that benefit both workers and businesses. Our goal is to disrupt the traditional notion of business and show that worker well-being can be a good business practice. The founders of the lab are Ach Adhvaryu, Professor of Economics and Director of the 21st Century India Center at the School of Global Policy and Strategy, UC San Diego (www.achadhvaryu.com); Anant Ahuja, Head of Organization Development at Shahi Exports Pvt. Ltd.; and Anant Nyshadham, Assistant Professor of Business Economics and Public Policy at the University of Michigan (www.anantnyshadham.com).

Role: Design Associate
Location: Delhi / Bengaluru / Remote in India
Start date: ASAP (applications accepted on a rolling basis)
Salary: 6.5-10 LPA (depending on experience)
Length of Commitment: Minimum of 12 months

About the role: The Design Associate (Qualitative Research) will work on all phases of a typical research cycle: from conception, desk research, and fieldwork to analysis and report creation. They will also support the team in solution design and ideation, presenting our work to internal and external stakeholders, creating training modules, and more, as required.

Key responsibilities: Work on all stages of the qualitative research cycle:
- Conduct in-depth literature reviews, tapping into different types of sources depending on the project.
- Conduct fieldwork consisting of qualitative interviews and FGDs with stakeholders across various projects, making regular field visits (to both urban and rural areas) to collect data.
- Plan fieldwork and data collection for projects based on the requirements.
- Create or support the creation and review of data collection tools (qualitative interview questionnaires).
- Identify participants based on apt sampling techniques.
- Work on the qualitative data analysis and synthesis process.
- Coordinate with stakeholders, manage field issues, and align the workflow within their premises.
- Plan, manage, and follow ethical data collection practices and ensure good data quality throughout the project, especially in the field.
- Maintain relations with clients/partners and stakeholders and perform additional duties as needed.
- Actively participate in and conduct brainstorming sessions for ideation and prototyping of solutions using a participatory design approach.
- Create and assist in writing reports of the work undertaken and other outcome collaterals.
- Plan and create training materials and modules for relevant projects as required.
- Work closely with other team members to create comprehensive collaterals as required.
- Create and assist in creating blogs and write-ups, including thought pieces and field experiences.
- Work closely with the team on a wide range of preparatory work for upcoming projects.
- Ensure project documentation is well maintained, including designing, maintaining, and tracking field reports/project logs in Google Docs and Sheets, along with drafting and developing materials, manuals, guidelines, and protocols per project requirements and under the supervision of senior team members.

Who are you:
- A graduate with a minimum of 2-3 years of work experience, or a postgraduate with a minimum of 1 year of work experience, in qualitative research.
- A degree in social sciences (Economics, Development Studies, International Development, Anthropology, Psychology, Behavioral Sciences, Sociology, Social Work, etc.) or allied areas.
- Demonstrated hands-on experience with all aspects of the qualitative research cycle (including fieldwork, literature review, data collection, data analysis, and report writing).
- Proficient knowledge of QDAS tools such as NVivo, Dedoose, Atlas.ti, or other relevant tools.
- Experience using two or more qualitative research design methods, such as ethnography, case study, grounded theory, phenomenology, narrative inquiry, etc.
- Comfortable working with a wide range of stakeholders, including groups with little or no background in qualitative research and design.
- Excellent interpersonal and written, visual, and verbal communication skills.
- Passionate about tackling complex social and organizational challenges.
- Ability to work in a team, manage multiple projects on the ground, review and prioritize work independently, and be self-motivated.
- Ability to complete assigned tasks and meet deadlines while maintaining high-quality work.

Preferable but not essential qualifications:
- Proven academic writing and/or grant writing experience.
- Experience or interest in working with design research tools and software (e.g., Miro, Dovetail).
- A research portfolio or writing samples demonstrating your qualitative research work and skills.
- Basic knowledge of survey data collection tools and techniques like SurveyCTO and Google Forms.
- Strong willingness to learn new tasks and methodologies.
- Ability to work with minimal supervision and with due diligence.

Also, we know it's tough, but please try to avoid the confidence gap. You don't have to match all the listed requirements exactly to be considered for this role.

What should you be comfortable with:
- A dynamic environment with competing priorities.
- Working within a global team with shared responsibilities.
- Independently coordinating with coworkers to accomplish goals.
- Being resourceful in new environments and scenarios.
- Problem-solving in high-pressure environments.

Perks of working with us: There are plenty of benefits at GBL; here are some examples:
- Flexible leave policy: Time away from work can be extremely helpful for maintaining a healthy work/life balance. GBL encourages managers and leadership to set the example by taking time off when needed and ensuring their team members do the same. We don't have a strict limit on paid leave, only suggested (extremely liberal) averages.
- Flexible working hours: We recognize that a better work-life balance can improve employee motivation, performance, and productivity, and reduce stress. Our norms here rest on a system of trust in each other and our common goals.
- GBL Care Systems: As an organization, we are committed to ensuring the well-being of our team members and creating a thriving work environment, because that gives us, together, the best chance at achieving our shared mission and sparking joy at work. We do this by partnering with organizations such as The Mindclan and Therapize, among others, for workshops and other well-being initiatives.
- Growth-oriented review policy: To foster collaboration, we have adopted regular reviews and check-ins among team members. We see a manager's role as going beyond what is expected of them by conventional management thinkers. Apart from delivering high-quality work, managers are responsible for the holistic development of their team members.
This can be achieved through practices inspired by coaching philosophy.

Additional benefits:
- Well-being budget: This includes an individual budget for each team member that they can claim reimbursement against for things such as therapy, any physical-health-related activity, and home office setup. Additionally, there's a separate budget for managers for care packages or other team activities, and a budget for our People Operations team to organize team-wide activities or provide mental health services in collaboration with organizations like Therapize and Mindclan.
- Informal virtual and in-person hangs and activities!

Recent projects and blog posts: To acclimatize yourself with some of our work, you can read our blog posts on Medium, and also go through our LinkedIn, Facebook, Twitter, and Instagram.

The process: We are glad you're interested in applying for this role! After each step, we decide whether to invite you to the next one. Our interview process for this role has the following steps:
- CV screening
- Phone call screening
- First round interview
- Second round interview
Depending on the candidate pool, we may add further interviews to make a well-thought-through decision.

Our commitment to diversity: GBL is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, national origin, religion, sexual orientation, gender, gender identity, age, physical disability, or length of time spent unemployed. We celebrate diversity and are committed to creating an inclusive environment for all employees. We offer traditional monetary workplace benefits such as insurance and travel allowance. We are a young and growing company, making us the ideal ground for team members to experiment, take on dynamic roles, and grow with us. We focus on happiness, output, and quality of work. If you have a disability or special need that requires accommodation, please let us know during the recruiting process.

Note: By clicking on the "apply for this job" button, you confirm that you understand and accept GBL's Privacy Policy. You also understand that GBL has zero tolerance for sexual harassment/exploitation/abuse/misconduct ("SEA"). You confirm and declare that you have never been convicted by any court of law and/or subjected to any sanctions, inquiry, or proceedings (be it disciplinary, administrative, civil, or criminal) arising from an investigation in relation to sexual harassment/abuse/misconduct, nor left employment pending such an investigation and refused to cooperate with it.
Posted 3 days ago
2.0 - 3.0 years
8 - 11 Lacs
Bengaluru
Work from Office
At Daimler Truck, we change today's transportation and create real impact together. We take responsibility around the globe and work together on making our vision become reality: Leading Sustainable Transportation. As one global team, we drive our progress and success together; everyone at Daimler Truck makes the difference. Together, we want to achieve sustainable transportation, reduce our carbon footprint, increase safety on and off the track, and develop smarter technology and attractive financial solutions. All essential to fulfill our purpose, for all who keep the world moving. Become part of our global team: you make the difference. YOU MAKE US.

This team is at the core of the Data & AI department for Daimler Truck and helps develop world-class AI platforms in various clouds (AWS, Azure) to support building analytics solutions, dashboards, ML models, and Gen AI solutions across the globe.

Required Skills & Qualifications:
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- 2-3 years of experience in data engineering or a similar role.
- Strong hands-on experience with Snowflake (data modeling, performance tuning, SnowSQL, etc.).
- Proficiency in SQL and experience with scripting languages like Python or Shell.
- Experience with ETL/ELT tools such as dbt, Apache Airflow, Informatica, or Talend.
- Familiarity with cloud platforms (AWS, Azure, or GCP) and services like S3, Lambda, or Data Factory.
- Understanding of data warehousing concepts and best practices.
- Excellent communication skills; willing to reskill, adapt, and build strong stakeholder relationships.
- An active team member, willing to go the extra mile and bring innovation to work.

ABOUT US: You don't bring everything with you? No problem! We look for skills but hire for attitude! #MAKEYOURMOVE and apply now; we're looking forward to it!

At Daimler Truck, we promote diversity and stand for an inclusive corporate culture. We value the individual strengths of our employees, because these lead to the best team performance and thus to the success of our company. Inclusion and equal opportunities are important to us. We welcome applications from people of all cultures and genders, parents, people with disabilities, and people of any community.

ADDITIONAL INFORMATION: We particularly welcome online applications from candidates with disabilities or similar impairments in direct response to this job advertisement. If you have any questions, you can contact the local disability officer once you have submitted your application form, who will gladly assist you in the onward application process: XXX@daimlertruck.com. If you have any questions regarding the application process, please contact HR Services by e-mail: hrservices@daimlertruck.com.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using Snowflake and other cloud-based tools (a minimal orchestration sketch follows this posting).
- Implement data ingestion, transformation, and integration processes from various sources (e.g., APIs, flat files, databases).
- Optimize Snowflake performance through clustering, partitioning, and query tuning.
- Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements.
- Ensure data quality, integrity, and security across all data pipelines and storage.
- Develop and maintain documentation related to data architecture, processes, and best practices.
- Monitor and troubleshoot data pipeline issues and ensure timely resolution.
Working experience with the medallion architecture and tools like Matillion, dbt models, and SNP Glue is highly recommended.

WHAT WE OFFER YOU (Note: fixed benefits that apply to Daimler Truck, Daimler Buses, and Daimler Truck Financial Services.) Among other things, the following benefits await you with us:
- Attractive compensation package
- Company pension plan
- Remote working
- Flexible working models that adapt to individual life phases
- Health offers
- Individual development opportunities through our own Learning Academy as well as free access to LinkedIn Learning
- Two individual benefits
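For illustration only, here is a minimal daily pipeline skeleton in Apache Airflow's Python API (2.4+ style); the DAG name is invented and the load step is a placeholder where a real pipeline would call the Snowflake connector or dbt.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def load_to_snowflake() -> None:
    # Placeholder: a real task would stage files and run COPY INTO / dbt run.
    print("extract -> stage -> COPY INTO target_table")


with DAG(
    dag_id="daily_snowflake_load",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="load", python_callable=load_to_snowflake)
```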
Posted 3 days ago
3.0 - 5.0 years
6 - 7 Lacs
Pune
Work from Office
About the position: We are seeking a highly organized, detail-oriented, and analytical Product Data Management Executive to join our team. This role is critical to ensuring the accuracy, completeness, and consistency of our product information across all online platforms. The ideal candidate will be responsible for managing the entire lifecycle of product data, from initial upload to ongoing maintenance and optimization, directly impacting customer experience and sales performance, with work centered on the KBNA PIM Ops Team. Primarily, this position supports the Kitchen and Bath business.

Key Responsibilities:
- Product Data Entry & Management: Accurately create, update, and maintain product listings, including descriptions, specifications, pricing, inventory levels, imagery, videos, and other digital assets on our e-commerce website(s) and PIM (Product Information Management) system. Ensure all product data adheres to internal style guides, SEO best practices, and channel-specific requirements. Perform bulk uploads and updates of product data as needed, ensuring data integrity and consistency.
- Data Quality & Governance: Conduct regular audits of product data to identify and rectify errors, inconsistencies, duplicates, and missing information (a minimal audit sketch follows this posting). Implement and enforce data governance policies and procedures to ensure data accuracy, completeness, and standardization across all systems. Work to standardize product attributes, categories, and naming conventions for improved searchability and user experience.
- Content Enrichment & Optimization: Collaborate with marketing, merchandising, and content teams to enrich product content, ensuring compelling and informative product descriptions, high-quality images, and relevant keywords. Optimize product titles, descriptions, and metadata for search engines (SEO) to improve organic visibility.
- Cross-functional Collaboration: Liaise with procurement/supply chain to obtain new product information, pricing, and inventory updates. Work closely with the sales and customer service teams to address product data queries and feedback. Collaborate with IT and development teams on system integrations, data migration, and troubleshooting data-related issues.
- Reporting & Analysis: Monitor product data performance metrics and identify areas for improvement. Assist in generating reports related to product data quality, completeness, and impact on sales.
- Process Improvement: Identify and propose improvements to existing product data management processes and workflows to enhance efficiency and accuracy. Stay updated with industry best practices and emerging tools in product information management.

End Goal: Deliver a consistent brand experience across all KBI/KBNA websites and consistency across digital assets for all SKUs across the globe.
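As referenced above, here is a minimal pandas sketch of a product data audit; the SKUs, columns, and rules are hypothetical and stand in for the checks a PIM team would codify.

```python
import pandas as pd

# Hypothetical product listing extract.
products = pd.DataFrame({
    "sku":   ["K-100", "K-100", "K-200", "K-300"],
    "title": ["Basin Tap", "Basin Tap", None, "Shower Head"],
    "price": [49.0, 49.0, 129.0, -1.0],
})

issues = {
    "duplicate_skus": sorted(set(products.loc[products.duplicated("sku"), "sku"])),
    "missing_titles": products.loc[products["title"].isna(), "sku"].tolist(),
    "invalid_prices": products.loc[products["price"] <= 0, "sku"].tolist(),
}
print(issues)
# {'duplicate_skus': ['K-100'], 'missing_titles': ['K-200'], 'invalid_prices': ['K-300']}
```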
Posted 3 days ago
8.0 - 12.0 years
25 - 30 Lacs
Bengaluru
Work from Office
Who are we? Founded in 2014 by Khadim Batti and Vara Kumar, Whatfix is a leading global B2B SaaS provider and the largest pure-play enterprise digital adoption platform (DAP). Whatfix empowers companies to maximize the ROI of their digital investments across the application lifecycle, from ideation to training to the deployment of software, driving user productivity, ensuring process compliance, and improving the user experience of internal and customer-facing applications. Spearheading the category with serial innovation and unmatched customer-centricity, Whatfix is the only DAP innovating beyond the category, positioning itself as a comprehensive suite for GenAI-powered digital adoption, analytics, and application simulation. The Whatfix product suite consists of three products: DAP, Product Analytics, and Mirror. This suite helps businesses accelerate ROI on digital investments by streamlining application deployment across its lifecycle. Whatfix has seven offices across the US, India, UK, Germany, Singapore, and Australia and a presence across 40+ countries.

- Customers: 700+ enterprise customers, including over 80 Fortune 500 companies such as Shell, Microsoft, Schneider Electric, and UPS Supply Chain Solutions.
- Investors: Raised a total of ~$270 million. Most recently, a Series E round of $125 million led by Warburg Pincus with participation from existing investor SoftBank Vision Fund 2. Other investors include Cisco Investments, Eight Roads Ventures (a division of Fidelity Investments), Dragoneer Investments, Peak XV Partners, and Stellaris Venture Partners.
- With over 45% YoY sustainable annual recurring revenue (ARR) growth, Whatfix is among the Top 50 Indian Software Companies per the G2 Best Software Awards.
- Recognized as a Leader in the digital adoption platforms (DAP) category for the past 4+ years by leading analyst firms like Gartner, Forrester, IDC, and Everest Group.
- The only vendor recognized as a Customers' Choice in the 2024 Gartner Voice of the Customer for Digital Adoption Platforms, earning the Customers' Choice distinction again in 2025.
- We also boast a star rating of 4.6 on G2 Crowd, 4.5 on Gartner Peer Insights, and a high CSAT of 99.8%.
- Highest-ranking DAP on the 2023 Deloitte Technology Fast 500 North America for the fourth consecutive year.
- Won Silver at the 2023 Stevie Awards for Employer of the Year in the Computer Software category, and recognized as a Great Place to Work 2022-2023.
- The only DAP among the top 35% of companies worldwide in sustainability excellence, with an EcoVadis Bronze Medal.
- On the G2 peer review platform, Whatfix received 77 Leader badges across all market segments, including Small, Medium, and Enterprise, in 2024, among numerous other industry recognitions.

Key Responsibilities:
- Design, develop, and own robust batch and streaming data pipelines (ETL/ELT) in our cloud-native stack.
- Model and curate data in our warehouse/lakehouse to enable self-service analytics, ML features, and dashboarding.
- Implement statistical analyses (e.g., regression, hypothesis testing) that convert raw data into trusted KPIs and actionable recommendations (see the sketch after this posting).
- Partner with Product, Growth, and Finance teams to frame business questions, quantify impact, and iterate on data-driven experiments.
- Write clean, tested Python/SQL; automate CI/CD and observability for data workflows; uphold data-quality SLAs.
- Architect next-gen lakehouse patterns and drive platform evolution.
- Establish coding standards, peer reviews, and data-governance best practices.
- Coach and upskill junior engineers; lead technical design and analytics review sessions.
- Champion a culture of insight-to-action, ensuring that analytics projects are tied to revenue, retention, and efficiency metrics.

Minimum Qualifications:
- Experience: 4-7 years building production data pipelines and models in the cloud.
- Languages & Frameworks: Strong SQL and Python; hands-on experience with a big-data processing framework (Spark, Flink, Beam, etc.) is an advantage.
- Cloud Services: Practical expertise with AWS, GCP, or Azure managed data services (e.g., S3/GCS, Kinesis/Pub/Sub, EMR/Dataproc).
- Analytics & Statistics: Working knowledge of statistical analysis, A/B testing, and regression modelling to translate data into business outcomes.
- Data Warehousing & Modeling: Familiarity with dimensional modeling, dbt (or similar), and performance tuning for analytics workloads.
- Software Engineering Fundamentals: Git, CI/CD, automated testing, and observability practices.
- Business Impact Mindset: Demonstrated examples of turning data into measurable product or revenue impact; senior candidates should show alignment of data roadmaps with OKRs and executive priorities.

Nice-to-Have Skills:
- Platforms: Snowflake (primary warehouse/lake), Looker/LookML, PySpark.
- Certifications: AWS Data Analytics Specialty, Google Professional Data Engineer, or equivalent.
- Experience with data governance, privacy, and security frameworks (GDPR, SOC 2).
- Exposure to ML feature stores, streaming analytics, or data observability stacks.

What You'll Get:
- Opportunity to shape the data backbone of a high-growth SaaS company used by Fortune 500 clients.
- A culture of ownership, continuous learning, and transparent communication.

Note: We strive to live and breathe our Cultural Principles and encourage employees to demonstrate some of these core values: Customer First; Empathy; Transparency; Fail Fast & Scale Fast; No Hierarchies for Communication; Deep Dive & Innovate; Trust, Do It as You Own It. We are an equal opportunity employer and value diverse people because of, and not in spite of, the differences. We do not discriminate on the basis of race, religion, color, national origin, ethnicity, gender, sexual orientation, age, marital status, veteran status, or disability status.
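As a hedged sketch of the hypothesis testing mentioned in the responsibilities, here is a Welch's t-test on synthetic experiment data using SciPy; the metric and effect size are invented.

```python
import numpy as np
from scipy import stats

# Synthetic per-user metric for a control and a variant group.
rng = np.random.default_rng(7)
control = rng.normal(loc=0.030, scale=0.010, size=5000)
variant = rng.normal(loc=0.032, scale=0.010, size=5000)

# Welch's t-test (unequal variances) for a difference in means.
t_stat, p_value = stats.ttest_ind(variant, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # a small p suggests a real lift
```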
Posted 3 days ago
5.0 - 10.0 years
7 - 11 Lacs
Bengaluru
Work from Office
About the team: In this role, you will join the C&D Infrastructure team, supporting initiatives to strengthen our data analysis capabilities and enhance the performance of our internal CRM digital application and related processes. You will work closely with the market units and technology teams to coordinate efforts, ensuring a consistent focus on customer and broker needs. A key part of the role involves improving application features and maintaining high data quality within the system.

Job Requirements:
- Support end-to-end feature testing and implementation within the application, including data management tasks.
- Identify business challenges by analyzing market units, gathering insights, and assessing effort versus business value.
- Analyze production and sales pipelines, assist with broker and client analysis, and deliver actionable insights.
- Oversee the data quality framework and collaborate with technology teams to ensure consistent definitions for sales-related data within the application.
- Identify opportunities to improve and standardize reporting and analytical processes.
- Design and implement internal process enhancements, such as automating manual testing, optimizing delivery workflows, and improving infrastructure scalability.
- Enhance application functionality and user experience through data-driven root cause analysis and UI improvements.
- Prepare and document pipeline reports and analyses aligned with sales strategies.
- Communicate solutions to business stakeholders and incorporate feedback for continuous improvement.
- Experience in conducting client satisfaction surveys (e.g., Net Promoter Score, NPS).
- Experience in managing stakeholders.

About you:
- Bachelor's degree (preferably in Economics, Statistics, Applied Mathematics, Physics, Computer Science, Engineering, or a related field), with 5-10 years of relevant experience.
- Proven expertise in predictive analytics techniques such as logistic regression and linear regression (a minimal sketch follows this posting).
- Hands-on experience with Python/PySpark and R.
- Strong proficiency in Microsoft Word, Excel, and PowerPoint; experienced with relational databases (SQL) and BI tools such as Power BI and Palantir.
- Self-motivated, well organized, and capable of managing multiple priorities while meeting tight deadlines.
- Skilled in communicating insights effectively through data visualization and presentations.
- Experience working with diverse countries and cultures is an advantage.
- Prior experience in the Commercial Insurance industry is a plus.

About Swiss Re: Swiss Re is one of the world's leading providers of reinsurance, insurance, and other forms of insurance-based risk transfer, working to make the world more resilient. We anticipate and manage a wide variety of risks, from natural catastrophes and climate change to cybercrime. We cover both Property & Casualty and Life & Health. Combining experience with creative thinking and cutting-edge expertise, we create new opportunities and solutions for our clients. This is possible thanks to the collaboration of more than 14,000 employees across the world. Our success depends on our ability to build an inclusive culture encouraging fresh perspectives and innovative thinking. We embrace a workplace where everyone has equal opportunities to thrive and develop professionally regardless of their age, gender, race, ethnicity, gender identity and/or expression, sexual orientation, physical or mental ability, skillset, thought, or other characteristics.
In our inclusive and flexible environment, everyone can bring their authentic selves to work, along with their passion for sustainability. If you are an experienced professional returning to the workforce after a career break, we encourage you to apply for open positions that match your skills and experience.

Reference Code: 134358
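As referenced above, a minimal, hedged sketch of the logistic regression expertise the posting asks for, fitted on synthetic pipeline-style features with scikit-learn; the features and target are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic features standing in for, e.g., broker tenure and submission size,
# with a binary "deal converted" target.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)
print("coefficients:", model.coef_[0])
print("training accuracy:", round(model.score(X, y), 3))
```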
Posted 3 days ago
2.0 - 4.0 years
11 - 15 Lacs
Bengaluru
Work from Office
OPENTEXT - THE INFORMATION COMPANY

OpenText is a global leader in information management, where innovation, creativity, and collaboration are the key components of our corporate culture. As a member of our team, you will have the opportunity to partner with the most highly regarded companies in the world, tackle complex issues, and contribute to projects that shape the future of digital transformation.

AI-First. Future-Driven. Human-Centered. At OpenText, AI is at the heart of everything we do: powering innovation, transforming work, and empowering digital knowledge workers. We're hiring talent that AI can't replace to help us shape the future of information management. Join us.

Your Impact: As a Python Developer in the Debricked data science team, you will work on enhancing data intake processes and optimizing data pipelines (a minimal intake-cleaning sketch follows this posting). You will apply many different approaches, depending on the needs of the product and the challenges you encounter. In some cases, we use AI/LLM techniques, and we expect the number of such cases to increase. Your contributions will directly impact Debricked's scope and quality and will help ensure future commercial growth of the product.

What the role offers:
- Innovative Data Solutions: Develop and optimize data pipelines that improve the efficiency, accuracy, and automation of the Debricked SCA tool's data intake processes.
- Collaborative Environment: Work closely with engineers and product managers from Sweden and India to create impactful, data-driven solutions.
- Continuous Improvement: Play an essential role in maintaining and improving the data quality that powers Debricked's analysis, improving the product's competitiveness.
- Skill Development: Collaborate across teams and leverage OpenText's resources (including an educational budget) to develop your expertise in software engineering, data science, and AI, expanding your skill set in both traditional and cutting-edge technologies.

What you need to succeed:
- 2-4 years of experience in Python development, with a focus on optimizing data processes and improving data quality.
- Proficiency in Python and related tools and libraries like Jupyter, pandas, and NumPy.
- A degree in Computer Science or a related discipline.
- An interest in application security.
- An asset to have: skills in Go, Java, LLMs (specifically Gemini), GCP, Kubernetes, MySQL, Elastic, and Neo4j.
- A strong understanding of how to manage and improve data quality in automated systems and pipelines.
- Ability to address complex data challenges and develop solutions to optimize systems.
- Comfortable working in a distributed team, collaborating across different time zones.

One last thing: OpenText is more than just a corporation; it's a global community where trust is foundational, the bar is raised, and outcomes are owned. Join us on our mission to drive positive change through privacy, technology, and collaboration. At OpenText, we don't just have a culture; we have character. Choose us because you want to be part of a company that embraces innovation and empowers its employees to make a difference.

OpenText's efforts to build an inclusive work environment go beyond simply complying with applicable laws. Our Employment Equity and Diversity Policy provides direction on maintaining a working environment that is inclusive of everyone, regardless of culture, national origin, race, color, gender, gender identification, sexual orientation, family status, age, veteran status, disability, religion, or other basis protected by applicable laws.
If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please contact us at hr@opentext.com. Our proactive approach fosters collaboration, innovation, and personal growth, enriching OpenText's vibrant workplace.
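As referenced above, a minimal sketch of a data intake cleaning step with pandas; the package/version columns are an invented stand-in for the dependency records an SCA tool ingests.

```python
import pandas as pd

# Hypothetical raw intake batch with whitespace, a missing name, and a duplicate.
raw = pd.DataFrame({
    "package": [" Requests ", "numpy", None, "requests"],
    "version": ["2.31.0", "1.26.4", "0.1", "2.31.0"],
})

clean = (
    raw.dropna(subset=["package"])                                       # drop malformed rows
       .assign(package=lambda d: d["package"].str.strip().str.lower())   # normalize names
       .drop_duplicates(subset=["package", "version"])                   # dedupe
       .reset_index(drop=True)
)
print(clean)  # two rows remain: requests 2.31.0 and numpy 1.26.4
```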
Posted 3 days ago
5.0 - 10.0 years
9 - 14 Lacs
Hyderabad
Work from Office
Responsibilities:
- Design, develop, and maintain scalable and efficient ETL/ELT pipelines using appropriate tools and technologies.
- Develop and optimize complex SQL queries for data extraction, transformation, and loading (a minimal upsert sketch follows this posting).
- Implement data quality checks and validation processes to ensure data integrity.
- Automate data pipelines and workflows for efficient data processing.
- Integrate data from diverse sources, including databases, APIs, and flat files.
- Manage and maintain data warehouses and data lakes.
- Implement data modeling and schema design.
- Ensure data security and compliance with relevant regulations.
- Provide data support for BI and reporting tools (MicroStrategy, Power BI, Tableau, Jaspersoft, etc.).
- Collaborate with BI developers to ensure data availability and accuracy.
- Optimize data queries and performance for reporting applications.
- Provide technical guidance and mentorship to junior data engineers.
- Lead code reviews and ensure adherence to coding standards and best practices.
- Contribute to the development of technical documentation and knowledge sharing.
- Design and implement data solutions on cloud platforms (AWS preferred).
- Utilize AWS data integration technologies such as Airflow and Glue.
- Manage and optimize cloud-based data infrastructure.
- Develop data processing applications using Python, Java, or Scala.
- Implement data transformations and algorithms using programming languages.
- Identify and resolve complex data-related issues.
- Proactively seek opportunities to improve data processes and technologies.
- Stay up to date with the latest data engineering trends and technologies.

Requirements:

Experience:
- 5 to 10 years of experience in Business Intelligence and Data Engineering.
- Proven experience in designing and implementing ETL/ELT processes.
- Expert-level proficiency in SQL (advanced/complex queries).
- Strong understanding of ETL concepts and experience with ETL/data integration tools (Informatica, ODI, Pentaho, etc.).
- Familiarity with one or more reporting tools (MicroStrategy, Power BI, Tableau, Jaspersoft, etc.).
- Knowledge of Python and cloud infrastructure (AWS preferred).
- Experience with AWS data integration technologies (Airflow, Glue).
- Programming experience in Java or Scala.
- Strong analytical and problem-solving skills.
- Excellent communication and interpersonal skills.
- Proven ability to take initiative and be innovative.
- Ability to work independently and as part of a team.

Education:
- B.Tech / M.Tech / MCA (must have).
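As referenced in the responsibilities, here is a minimal idempotent merge step of the kind an ELT pipeline runs after staging, shown with Python's sqlite3 (window functions and upserts require SQLite 3.25+); all table and column names are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (id INTEGER PRIMARY KEY, amount REAL, updated_at TEXT)")
conn.execute("CREATE TABLE staging (id INTEGER, amount REAL, updated_at TEXT)")
conn.executemany(
    "INSERT INTO staging VALUES (?, ?, ?)",
    [(1, 10.0, "2024-01-01"), (1, 12.5, "2024-01-02"), (2, 7.0, "2024-01-02")],
)

# Keep only the latest staged row per id, then upsert into the target so the
# step can be re-run safely.
conn.execute("""
INSERT INTO target (id, amount, updated_at)
SELECT id, amount, updated_at FROM (
    SELECT *, ROW_NUMBER() OVER (PARTITION BY id ORDER BY updated_at DESC) AS rn
    FROM staging
) WHERE rn = 1
ON CONFLICT(id) DO UPDATE SET
    amount = excluded.amount,
    updated_at = excluded.updated_at
""")
print(conn.execute("SELECT * FROM target ORDER BY id").fetchall())
# [(1, 12.5, '2024-01-02'), (2, 7.0, '2024-01-02')]
```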
Posted 3 days ago
2.0 - 5.0 years
18 - 21 Lacs
Hyderabad
Work from Office
Overview: Annalect is currently seeking a data engineer to join our technology team. In this role you will build Annalect products which sit atop cloud-based data infrastructure. We are looking for people who have a shared passion for technology, design & development, data, and fusing these disciplines together to build cool things. In this role, you will work on one or more software and data products in the Annalect Engineering Team. You will participate in technical architecture, design, and development of software products as well as research and evaluation of new technical solutions.

Responsibilities:
- Design, build, test, and deploy scalable and reusable systems that handle large amounts of data.
- Collaborate with product owners and data scientists to build new data products.
- Ensure data quality and reliability.

Qualifications:
- Experience designing and managing data flows.
- Experience designing systems and APIs to integrate data into applications.
- 4+ years of Linux, Bash, Python, and SQL experience.
- 2+ years using Spark and other frameworks to process large volumes of data (see the sketch after this posting).
- 2+ years using Parquet, ORC, or other columnar file formats.
- 2+ years using cloud data-processing services, e.g., AWS Glue, EMR, Athena, and Redshift; GCP Dataflow, Dataproc, and BigQuery; Azure Data Factory and HDInsight.
- Passion for Technology: excitement for new technology, bleeding-edge applications, and a positive attitude towards solving real-world challenges.
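As referenced above, a minimal PySpark sketch that writes and re-reads a columnar Parquet file and aggregates it; the path and schema are hypothetical, and it assumes a local Spark installation.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-rollup").getOrCreate()

# Tiny in-memory dataset standing in for a large event feed.
df = spark.createDataFrame(
    [("2024-01-01", "click", 3), ("2024-01-01", "view", 9), ("2024-01-02", "click", 5)],
    ["day", "event", "count"],
)

df.write.mode("overwrite").parquet("/tmp/events.parquet")  # columnar storage

rollup = (
    spark.read.parquet("/tmp/events.parquet")
    .groupBy("day")
    .agg(F.sum("count").alias("events"))
)
rollup.show()
spark.stop()
```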
Posted 3 days ago
5.0 - 10.0 years
11 - 16 Lacs
Kolkata
Work from Office
Responsibilities:

Data Exploration and Insights:
- Conduct continuous data exploration and analysis to identify opportunities for enhancing data matching logic, including fuzzy logic, and improving overall data quality within the SCI solution (a fuzzy-matching sketch using only the standard library follows this posting). This includes working with large datasets from various sources, including Excel files and databases.

Data Quality Improvement:
- Perform various analyses specifically aimed at improving data quality within the SCI system. This will involve identifying data quality issues, proposing solutions, and implementing improvements.

Weekly Playback and Collaboration:
- Participate in weekly playback sessions, using Jupyter Notebook to demonstrate data insights and analysis.
- Incorporate new explorations and analyses based on feedback from the working group and prioritized tasks.

Project Scaling and Support:
- Contribute to the scaling of the SCI project by supporting data acquisition, cleansing, and validation processes for new markets. This includes prerequisites for batch ingestion and post-batch-ingestion analysis and validation of SCI records.

Data Analysis and Validation:
- Perform thorough data analysis and validation of SCI records after batch ingestion.
- Proactively identify insights and implement solutions to improve data quality.

Stakeholder Collaboration:
- Coordinate with business stakeholders to facilitate the manual validation of records flagged for manual intervention.
- Communicate findings and recommendations clearly and effectively.

Technical Requirements:
- 5+ years of experience as a Data Scientist.
- Strong proficiency in Python and SQL.
- Extensive experience using Jupyter Notebook for data analysis and visualization.
- Working knowledge of data matching techniques, including fuzzy logic.
- Experience working with large datasets from various sources (Excel, databases, etc.).
- Solid understanding of data quality principles and methodologies.

Skills:
- SQL
- Machine Learning (valuable for this role, though not strictly required)
- Data Analysis
- Jupyter Notebook
- Data Cleansing
- Fuzzy Logic
- Python
- Data Quality Improvement
- Data Validation
- Data Acquisition
- Communication and Collaboration
- Problem-solving and Analytical skills

Preferred Qualifications (optional):
- Experience with specific data quality tools and techniques.
- Familiarity with cloud computing platforms (e.g., AWS, Azure, GCP).
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Knowledge of statistical modeling and machine learning algorithms.
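As referenced above, a standard-library-only sketch of fuzzy name matching; production matching pipelines typically use dedicated libraries (e.g., RapidFuzz) and blocking strategies, but the idea is the same. The names and threshold are invented.

```python
from difflib import SequenceMatcher


def similarity(a: str, b: str) -> float:
    """Normalized edit-based similarity between two strings."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()


query = "acme industries ltd."
candidates = ["Acme Industries Ltd", "ACME Industries Limited", "Apex Industrial"]

for name in candidates:
    score = similarity(query, name)
    flag = "  <- likely match" if score > 0.85 else ""
    print(f"{name!r}: {score:.2f}{flag}")
```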
Posted 3 days ago
5.0 - 7.0 years
10 - 14 Lacs
Kolkata
Work from Office
Summary: We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company.

Responsibilities:

Ontology Development:
- Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards.
- Collaborate with domain experts to capture and formalize domain knowledge into ontological structures.
- Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes.

Data Modeling:
- Design and implement semantic and syntactic data models that adhere to ontological principles.
- Create data models that are scalable, flexible, and adaptable to changing business needs.
- Integrate data models with existing data infrastructure and applications.

Knowledge Graph Implementation:
- Design and build knowledge graphs based on ontologies and data models (a toy RDF sketch follows this posting).
- Develop algorithms and tools for knowledge graph population, enrichment, and maintenance.
- Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems.

Data Quality and Governance:
- Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs.
- Define and implement data governance processes and standards for ontology development and maintenance.

Collaboration and Communication:
- Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions.
- Communicate complex technical concepts clearly and effectively to diverse audiences.

Qualifications:

Education:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.

Experience:
- 5+ years of experience in data engineering or a related role.
- Proven experience in ontology development using BFO and CCO or similar ontological frameworks.
- Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL.
- Proficiency in Python, SQL, and other programming languages used for data engineering.
- Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus.

Desired Skills:
- Familiarity with machine learning and natural language processing techniques.
- Experience with cloud-based data platforms (e.g., AWS, Azure, GCP).
- Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon.
- Strong problem-solving and analytical skills.
- Excellent communication and interpersonal skills.
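As referenced above, a toy knowledge-graph fragment using the rdflib package; the namespace, classes, and properties are invented for illustration and are not BFO/CCO terms, so only the RDF and SPARQL mechanics carry over.

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/ont#")  # hypothetical namespace
g = Graph()
g.bind("ex", EX)

# Three triples: an asset, its location, and a human-readable label.
g.add((EX.pump1, RDF.type, EX.Asset))
g.add((EX.pump1, EX.locatedIn, EX.plantA))
g.add((EX.pump1, EX.label, Literal("Pump 1")))

# SPARQL over the in-memory graph: every asset and where it sits.
for row in g.query("""
    PREFIX ex: <http://example.org/ont#>
    SELECT ?asset ?site WHERE { ?asset a ex:Asset ; ex:locatedIn ?site }
"""):
    print(row.asset, "->", row.site)
```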
Posted 3 days ago
7.0 - 10.0 years
5 - 8 Lacs
Chennai
Work from Office
Employment Type: Contract (Remote).

Job Summary: We are looking for a highly skilled Data Engineer / Data Modeler with strong experience in Snowflake, DBT, and GCP to support our data infrastructure and modeling initiatives. The ideal candidate should possess excellent SQL skills, hands-on experience with Erwin Data Modeler, and a strong background in modern data architectures and data modeling techniques.

Key Responsibilities:
- Design and implement scalable data models using Snowflake and Erwin Data Modeler.
- Create, maintain, and enhance data pipelines using DBT and GCP (BigQuery, Cloud Storage, Dataflow).
- Perform reverse engineering on existing systems (e.g., Sailfish/DDMS) using DBeaver or similar tools to understand and rebuild data models.
- Develop efficient SQL queries and stored procedures for data transformation, quality, and validation (a minimal BigQuery sketch follows this posting).
- Collaborate with business analysts and stakeholders to gather data requirements and convert them into physical and logical models.
- Ensure performance tuning, security, and optimization of the Snowflake data warehouse.
- Document metadata, data lineage, and the business logic behind data structures and flows.
- Participate in code reviews, enforce coding standards, and provide best practices for data modeling and governance.

Must-Have Skills:
- Snowflake architecture, schema design, and data warehouse experience.
- DBT (Data Build Tool) for data transformation and pipeline development.
- Strong expertise in SQL (query optimization, complex joins, window functions, etc.).
- Hands-on experience with Erwin Data Modeler (logical and physical modeling).
- Experience with GCP (BigQuery, Cloud Composer, Cloud Storage).
- Experience in reverse engineering legacy systems like Sailfish or DDMS using DBeaver.

Good to Have:
- Experience with CI/CD tools and DevOps for data environments.
- Familiarity with data governance, security, and privacy practices.
- Exposure to Agile methodologies and working in distributed teams.
- Knowledge of Python for data engineering tasks and orchestration scripts.

Soft Skills:
- Excellent problem-solving and analytical skills.
- Strong communication and stakeholder management.
- Self-driven, with the ability to work independently in a remote setup.
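As referenced above, a minimal window-function query run through the BigQuery Python client; the project, dataset, and table are hypothetical, and it assumes the google-cloud-bigquery package with application default credentials.

```python
from google.cloud import bigquery

client = bigquery.Client()  # picks up default project and credentials

# Latest order per customer via ROW_NUMBER; QUALIFY filters on the window
# column without a wrapping subquery.
sql = """
SELECT
    customer_id,
    order_ts,
    ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_ts DESC) AS rn
FROM `my_project.sales.orders`   -- hypothetical table
QUALIFY rn = 1
"""
for row in client.query(sql).result():
    print(row.customer_id, row.order_ts)
```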
Posted 3 days ago
5.0 - 10.0 years
11 - 16 Lacs
Mumbai
Work from Office
Responsibilities:

Data Exploration and Insights:
- Conduct continuous data exploration and analysis to identify opportunities for enhancing data matching logic, including fuzzy logic, and improving overall data quality within the SCI solution. This includes working with large datasets from various sources, including Excel files and databases.

Data Quality Improvement:
- Perform various analyses specifically aimed at improving data quality within the SCI system. This will involve identifying data quality issues, proposing solutions, and implementing improvements.

Weekly Playback and Collaboration:
- Participate in weekly playback sessions, using Jupyter Notebook to demonstrate data insights and analysis.
- Incorporate new explorations and analyses based on feedback from the working group and prioritized tasks.

Project Scaling and Support:
- Contribute to the scaling of the SCI project by supporting data acquisition, cleansing, and validation processes for new markets. This includes prerequisites for batch ingestion and post-batch-ingestion analysis and validation of SCI records.

Data Analysis and Validation:
- Perform thorough data analysis and validation of SCI records after batch ingestion.
- Proactively identify insights and implement solutions to improve data quality.

Stakeholder Collaboration:
- Coordinate with business stakeholders to facilitate the manual validation of records flagged for manual intervention.
- Communicate findings and recommendations clearly and effectively.

Technical Requirements:
- 5+ years of experience as a Data Scientist.
- Strong proficiency in Python and SQL.
- Extensive experience using Jupyter Notebook for data analysis and visualization.
- Working knowledge of data matching techniques, including fuzzy logic.
- Experience working with large datasets from various sources (Excel, databases, etc.).
- Solid understanding of data quality principles and methodologies.

Skills:
- SQL
- Machine Learning (valuable for this role, though not strictly required)
- Data Analysis
- Jupyter Notebook
- Data Cleansing
- Fuzzy Logic
- Python
- Data Quality Improvement
- Data Validation
- Data Acquisition
- Communication and Collaboration
- Problem-solving and Analytical skills

Preferred Qualifications (optional):
- Experience with specific data quality tools and techniques.
- Familiarity with cloud computing platforms (e.g., AWS, Azure, GCP).
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Knowledge of statistical modeling and machine learning algorithms.
Posted 3 days ago
5.0 - 7.0 years
10 - 14 Lacs
Chennai
Work from Office
Job Title: Sr. Data Engineer - Ontology & Knowledge Graph Specialist
Department: Platform Engineering

Summary: We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company.

Responsibilities:

Ontology Development:
- Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards.
- Collaborate with domain experts to capture and formalize domain knowledge into ontological structures.
- Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes.

Data Modeling:
- Design and implement semantic and syntactic data models that adhere to ontological principles.
- Create data models that are scalable, flexible, and adaptable to changing business needs.
- Integrate data models with existing data infrastructure and applications.

Knowledge Graph Implementation:
- Design and build knowledge graphs based on ontologies and data models.
- Develop algorithms and tools for knowledge graph population, enrichment, and maintenance.
- Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems.

Data Quality and Governance:
- Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs.
- Define and implement data governance processes and standards for ontology development and maintenance.

Collaboration and Communication:
- Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions.
- Communicate complex technical concepts clearly and effectively to diverse audiences.

Qualifications:

Education:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.

Experience:
- 5+ years of experience in data engineering or a related role.
- Proven experience in ontology development using BFO and CCO or similar ontological frameworks.
- Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL.
- Proficiency in Python, SQL, and other programming languages used for data engineering.
- Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus.

Desired Skills:
- Familiarity with machine learning and natural language processing techniques.
- Experience with cloud-based data platforms (e.g., AWS, Azure, GCP).
- Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon.
- Strong problem-solving and analytical skills.
- Excellent communication and interpersonal skills.
Posted 3 days ago
5.0 - 10.0 years
11 - 16 Lacs
Bengaluru
Work from Office
Data Scientist
Responsibilities :
Data Exploration and Insights :
- Conduct continuous data exploration and analysis to identify opportunities for enhancing data matching logic, including fuzzy logic, and improving overall data quality within the SCI solution. This includes working with large datasets from various sources, including Excel files and databases.
Data Quality Improvement :
- Perform various analyses specifically aimed at improving data quality within the SCI system. This will involve identifying data quality issues, proposing solutions, and implementing improvements.
Weekly Playback and Collaboration :
- Participate in weekly playback sessions, using Jupyter Notebook to demonstrate data insights and analysis.
- Incorporate new explorations and analyses based on feedback from the working group and prioritized tasks.
Project Scaling and Support :
- Contribute to the scaling of the SCI project by supporting data acquisition, cleansing, and validation processes for new markets. This includes pre-requisites for batch ingestion and post-batch ingestion analysis and validation of SCI records.
Data Analysis and Validation :
- Perform thorough data analysis and validation of SCI records after batch ingestion.
- Proactively identify insights and implement solutions to improve data quality.
Stakeholder Collaboration :
- Coordinate with business stakeholders to facilitate the manual validation of records flagged for manual intervention.
- Communicate findings and recommendations clearly and effectively.
Technical Requirements :
- 5+ years of experience as a Data Scientist.
- Strong proficiency in Python and SQL.
- Extensive experience using Jupyter Notebook for data analysis and visualization.
- Working knowledge of data matching techniques, including fuzzy logic.
- Experience working with large datasets from various sources (Excel, databases, etc.).
- Solid understanding of data quality principles and methodologies.
Skills :
- SQL
- Machine Learning
- Data Analysis
- Jupyter Notebook
- Data Cleansing
- Fuzzy Logic
- Python
- Data Quality Improvement
- Data Validation
- Data Acquisition
- Communication and Collaboration
- Problem-solving and Analytical skills
Preferred Qualifications :
- Experience with data quality tools and techniques.
- Familiarity with cloud computing platforms (e.g., AWS, Azure, GCP).
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Knowledge of statistical modeling and machine learning algorithms.
Posted 3 days ago
8.0 - 10.0 years
3 - 6 Lacs
Chennai
Work from Office
About the Role : Senior Business Intelligence Analyst
The Business Intelligence Analyst is responsible for collecting and analyzing data from multiple source systems to help the organization make better business decisions. This role is crucial in maintaining data quality, compliance, and accessibility while driving data-driven decision-making and reporting for Mindsprint clients. The role requires a combination of OLAM business domain expertise, problem-solving skills, and business acumen.
Responsibilities :
- Create, review, validate, and manage data as it is collected, acting as custodian of the data being generated.
- Develop policies and procedures for the collection and analysis of data.
- Apply analytical skills to derive meaningful insights from data and generate predictive, insightful reports.
- Build daily reports and schedule internal weekly and monthly meetings, preparing in advance to share relevant and beneficial information.
- Data Ownership : Assume ownership of specific datasets, data dictionaries, metadata, and master data, and ensure data accuracy, completeness, and relevance.
- Data Integration : Collaborate with system owners, data engineers, domain experts, and integration teams to facilitate the smooth integration of financial data from multiple systems/entities into the financial transactional and analytical datamarts.
- Data Quality Assurance : Establish and enforce data quality standards and policies within the financial domain. Collaborate with data engineers, analysts, data stewards, and data custodians to monitor and improve data quality.
- Data Access Control : Control and manage access to data, ensuring appropriate permissions and security measures are in place. Monitor and audit data access to prevent unauthorized use.
- Data Reporting and Analysis : Collaborate with finance teams to generate accurate and timely financial reports. Perform data analysis to identify trends, anomalies, and insights in financial data, supporting financial modelling, forecasting, and predictive decision-making.
- Collaborate with co-workers and management to implement improvements.
Job Qualifications :
- Master's/bachelor's degree in finance and accounting or related fields. An advanced degree is a plus.
- Proven experience in financial data management, data governance, and data analysis.
- Demonstrated ability to approach complex problems with analytical and critical thinking skills.
- Excellent written and verbal communication skills.
- Leadership skills and the ability to collaborate effectively with cross-functional teams.
- Ability to influence and interact with senior management.
Preferred Qualifications & Skills :
- Knowledge of Big Data, Data Lake, Azure Data Factory (ADF), Snowflake, Databricks, Synapse, Monte Carlo, Atlan, and DevOps tools like DBT.
- Agile project management skills with knowledge of JIRA & Confluence.
- Good understanding of financial concepts such as Balance Sheet, P&L, TB, Direct Costs Management, Fair Value, Book Value, Production/Standard Costs, Stock Valuations, Ratios, and Sustainability Finance.
- Experience working with ERP data, especially SAP FI and SAP CO.
- A strategic mindset and the ability to identify opportunities to use data to drive business growth, thinking creatively to find innovative solutions to complex problems.
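To make the data-quality responsibilities above concrete, here is a minimal pandas sketch of rule-based checks on a financial extract. The column names and rules are purely illustrative, not taken from any actual Mindsprint datamart:

```python
# Illustrative only: rule-based data-quality checks on a financial extract.
# Columns (entity, account, amount, period) and rules are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "entity": ["IN01", "IN01", "SG02", None],
    "account": ["4000", "4000", "5100", "5100"],
    "amount": [1200.0, 1200.0, -350.5, 980.0],
    "period": ["2024-01", "2024-01", "2024-01", "2024-02"],
})

# Each check yields a count of offending rows, suitable for a daily report.
checks = {
    "missing_entity": int(df["entity"].isna().sum()),
    "duplicate_rows": int(df.duplicated().sum()),
    "negative_amounts": int((df["amount"] < 0).sum()),
}
for name, count in checks.items():
    print(f"{name}: {count}")
```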
Posted 3 days ago
3.0 - 6.0 years
5 - 8 Lacs
Bengaluru
Work from Office
Duration : 6 Months
Notice Period : Within 15 days or immediate joiner
Experience : 3-6 Years
About The Role : As a Data Engineer for the Data Science team, you will play a pivotal role in enriching and maintaining the organization's central repository of datasets. This repository serves as the backbone for advanced data analytics and machine learning applications, enabling actionable insights from financial and market data. You will work closely with cross-functional teams to design and implement robust ETL pipelines that automate data updates and ensure accessibility across the organization. This is a critical role requiring technical expertise in building scalable data pipelines, ensuring data quality, and supporting data analytics and reporting infrastructure for business growth.
Note : Must be ready for a face-to-face interview in Bangalore (last round). Should be working with Azure as the cloud technology.
Key Responsibilities :
ETL Development :
- Design, develop, and maintain efficient ETL processes for handling multi-scale datasets.
- Implement and optimize data transformation and validation processes to ensure data accuracy and consistency.
- Collaborate with cross-functional teams to gather data requirements and translate business logic into ETL workflows.
Data Pipeline Architecture :
- Architect, build, and maintain scalable and high-performance data pipelines to enable seamless data flow.
- Evaluate and implement modern technologies to enhance the efficiency and reliability of data pipelines.
- Build pipelines for extracting data via web scraping to source sector-specific datasets on an ad hoc basis.
Data Modeling :
- Design and implement data models to support analytics and reporting needs across teams.
- Optimize database structures to enhance performance and scalability.
Data Quality and Governance :
- Develop and implement data quality checks and governance processes to ensure data integrity.
- Collaborate with stakeholders to define and enforce data quality standards across the organization.
Documentation and Communication :
- Maintain detailed documentation of ETL processes, data models, and other key workflows.
- Effectively communicate complex technical concepts to non-technical stakeholders and business teams.
Collaboration :
- Work closely with the Quant team and developers to design and optimize data pipelines.
- Collaborate with external stakeholders to understand business requirements and translate them into technical solutions.
Essential Requirements
Basic Qualifications :
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Familiarity with big data technologies like Hadoop, Spark, and Kafka.
- Experience with data modeling tools and techniques.
- Excellent problem-solving, analytical, and communication skills.
- Proven experience as a Data Engineer with expertise in ETL techniques (minimum 3 years).
- 3-6 years of strong programming experience in languages such as Python, Java, or Scala.
- Hands-on experience in web scraping to extract and transform data from publicly available web sources.
- Proficiency with cloud-based data platforms such as AWS, Azure, or GCP.
- Strong knowledge of SQL and experience with relational and non-relational databases.
- Deep understanding of data warehousing concepts.
Preferred Qualifications :
- Master's degree in Computer Science or Data Science.
- Knowledge of data streaming and real-time processing frameworks.
- Familiarity with data governance and security best practices.
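For illustration, a compact Python sketch of the extract-transform-load shape this role describes. The source URL, schema, and validation rules are hypothetical, and a production pipeline would add retries, logging, and orchestration (for example, an Azure Data Factory trigger around this step):

```python
# Minimal ETL sketch: extract a public CSV, validate and clean it, then
# load to Parquet for the central dataset repository. Illustrative only.
import pandas as pd

SOURCE_URL = "https://example.com/market_data.csv"  # hypothetical source
REQUIRED_COLUMNS = {"ticker", "date", "close"}

def extract(url: str) -> pd.DataFrame:
    """Pull the raw file; pandas reads directly from a URL."""
    return pd.read_csv(url)

def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Validate schema, drop incomplete rows, normalize types, dedupe."""
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        raise ValueError(f"Source is missing columns: {missing}")
    df = df.dropna(subset=["ticker", "date"])
    df["date"] = pd.to_datetime(df["date"])
    return df.drop_duplicates(subset=["ticker", "date"])

def load(df: pd.DataFrame, path: str) -> None:
    """Write a columnar file (requires pyarrow or fastparquet)."""
    df.to_parquet(path, index=False)

if __name__ == "__main__":
    load(transform(extract(SOURCE_URL)), "market_data.parquet")
```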
Posted 3 days ago
6.0 - 7.0 years
2 - 6 Lacs
Chennai
Hybrid
Role : ETL Test Automation Engineer to design, implement, and maintain test automation frameworks for real-time streaming ETL pipelines in a cloud-based environment.
Required Experience : 6-7 years, mandatory in ETL Automation Testing
Start Date : Immediate - immediate joiners only
Background Verification : Mandatory, through a third-party verification process.
Skills :
- Strong knowledge of ETL testing with a focus on streaming data pipelines (Apache Flink, Kafka).
- Experience with database testing and validation (PostgreSQL, NoSQL).
- Proficiency in Java for writing test automation scripts.
- Hands-on experience with automation scripting for ETL workflows, data validation, and transformation checks.
- Experience in performance testing and optimizing ETL jobs to handle large-scale data streams.
- Proficient in writing and executing complex database queries for data validation and reconciliation.
- Experience with CI/CD tools (Jenkins, GitHub Actions, Azure DevOps) for integrating test automation.
- Familiarity with cloud-based ETL testing in Azure.
- Ability to troubleshoot data flow and transformation issues in real time.
Nice to Have :
- Experience with Karate DSL, NiFi, and CLI-based testing tools.
- Experience in developing self-healing mechanisms for data integrity issues.
- Hands-on experience with automated data drift detection.
- Knowledge of data observability tools.
- Familiarity with containerization (Docker, Kubernetes) and Infrastructure as Code (Terraform, Ansible) for ETL deployment.
- Experience with testing message queues (Kafka, Pulsar, RabbitMQ, etc.).
- Domain knowledge of banking and financial institutions and/or large enterprise IT environments will be considered a strong asset.
Next Steps : If you're excited about this opportunity and meet the requirements, we'd love to hear from you! To apply, please reply with the following details along with your updated resume :
1. Total experience
2. Current salary
3. Expected salary
4. Notice period - how soon can you join after selection?
5. Are you an immediate joiner? (Yes/No)
6. Current location
7. Are you available for the hybrid work setup mentioned, in Chennai? (Yes/No)
8. Do you have a minimum of 6-7 years of hands-on experience in ETL Automation Testing? (Yes/No)
9. How much experience do you have in ETL Automation Testing?
10. Are you open to working a day shift (IST) at the mentioned job location? (Yes/No)
11. Are you open to working from the office at the Chennai location? (Yes/No)
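Although this role specifies Java for test automation, the source-to-target reconciliation idea behind such tests can be sketched compactly in Python with pytest. The in-memory SQLite tables below are hypothetical stand-ins for real source and target databases (e.g., PostgreSQL):

```python
# Illustrative source-to-target reconciliation checks for an ETL pipeline.
# Table names, columns, and the SQLite stand-in are all hypothetical.
import sqlite3
import pytest

@pytest.fixture
def conn():
    # Seed identical source and target tables; a real suite would connect
    # to the actual databases the pipeline reads from and writes to.
    c = sqlite3.connect(":memory:")
    c.executescript("""
        CREATE TABLE source_orders (id INTEGER, amount REAL);
        CREATE TABLE target_orders (id INTEGER, amount REAL);
        INSERT INTO source_orders VALUES (1, 10.0), (2, 20.5);
        INSERT INTO target_orders VALUES (1, 10.0), (2, 20.5);
    """)
    yield c
    c.close()

def test_row_counts_match(conn):
    src = conn.execute("SELECT COUNT(*) FROM source_orders").fetchone()[0]
    tgt = conn.execute("SELECT COUNT(*) FROM target_orders").fetchone()[0]
    assert src == tgt, f"Row count drift: source={src}, target={tgt}"

def test_amount_totals_reconcile(conn):
    src = conn.execute("SELECT ROUND(SUM(amount), 2) FROM source_orders").fetchone()[0]
    tgt = conn.execute("SELECT ROUND(SUM(amount), 2) FROM target_orders").fetchone()[0]
    assert src == tgt, f"Sum mismatch: source={src}, target={tgt}"
```

Count and aggregate reconciliation of this shape is typically the first layer of ETL validation; transformation-rule checks and data-drift detection build on top of it.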
Posted 3 days ago
5.0 - 10.0 years
11 - 16 Lacs
Chennai
Work from Office
Responsibilities :
Data Exploration and Insights :
- Conduct continuous data exploration and analysis to identify opportunities for enhancing data matching logic, including fuzzy logic, and improving overall data quality within the SCI solution. This includes working with large datasets from various sources, including Excel files and databases.
Data Quality Improvement :
- Perform various analyses specifically aimed at improving data quality within the SCI system. This will involve identifying data quality issues, proposing solutions, and implementing improvements.
Weekly Playback and Collaboration :
- Participate in weekly playback sessions, using Jupyter Notebook to demonstrate data insights and analysis.
- Incorporate new explorations and analyses based on feedback from the working group and prioritized tasks.
Project Scaling and Support :
- Contribute to the scaling of the SCI project by supporting data acquisition, cleansing, and validation processes for new markets. This includes pre-requisites for batch ingestion and post-batch ingestion analysis and validation of SCI records.
Data Analysis and Validation :
- Perform thorough data analysis and validation of SCI records after batch ingestion.
- Proactively identify insights and implement solutions to improve data quality.
Stakeholder Collaboration :
- Coordinate with business stakeholders to facilitate the manual validation of records flagged for manual intervention.
- Communicate findings and recommendations clearly and effectively.
Technical Requirements :
- 5+ years of experience as a Data Scientist.
- Strong proficiency in Python and SQL.
- Extensive experience using Jupyter Notebook for data analysis and visualization.
- Working knowledge of data matching techniques, including fuzzy logic.
- Experience working with large datasets from various sources (Excel, databases, etc.).
- Solid understanding of data quality principles and methodologies.
Skills :
- SQL
- Machine Learning
- Data Analysis
- Jupyter Notebook
- Data Cleansing
- Fuzzy Logic
- Python
- Data Quality Improvement
- Data Validation
- Data Acquisition
- Communication and Collaboration
- Problem-solving and Analytical skills
Preferred Qualifications :
- Experience with data quality tools and techniques.
- Familiarity with cloud computing platforms (e.g., AWS, Azure, GCP).
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Knowledge of statistical modeling and machine learning algorithms.
Posted 3 days ago
5.0 - 10.0 years
20 - 25 Lacs
Bengaluru
Work from Office
The Platform Data Engineer will be responsible for designing and implementing robust data platform architectures, integrating diverse data technologies, and ensuring scalability, reliability, performance, and security across the platform. The role involves setting up and managing infrastructure for data pipelines, storage, and processing; developing internal tools to enhance platform usability; implementing monitoring and observability; collaborating with software engineering teams on seamless integration; and driving capacity planning and cost optimization initiatives.
Posted 4 days ago
3.0 - 8.0 years
1 - 5 Lacs
Bengaluru
Work from Office
Project Role : Application Tech Support Practitioner
Project Role Description : Act as the ongoing interface between the client and the system or application. Dedicated to quality, using exceptional communication skills to keep our world-class systems running. Can accurately define a client issue and can interpret and design a resolution based on deep product knowledge.
Must have skills : Unified Communication and Collaboration Operations
Good to have skills : Unified Communication and Collaboration Implementation
Minimum experience required : 7.5 year(s)
Educational Qualification : 15 years full time education
Summary : As an Application Tech Support Practitioner, you will act as the ongoing interface between the client and the Unified Communication and Collaboration Operations system. You will use exceptional communication skills to keep our world-class systems running smoothly. With deep product knowledge, you will accurately define client issues and design effective resolutions.
Roles & Responsibilities :
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Ensure smooth operation of the Unified Communication and Collaboration Operations system.
- Resolve client issues by accurately defining and interpreting them.
- Design effective resolutions based on deep product knowledge.
- Collaborate with the team to provide solutions to work-related problems.
- Continuously improve the Unified Communication and Collaboration Operations system.
Professional & Technical Skills :
- Must To Have Skills : Proficiency in Unified Communication and Collaboration Operations.
- Good To Have Skills : Experience with Unified Communication and Collaboration Implementation.
- Strong understanding of Unified Communication and Collaboration Operations.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing various Unified Communication and Collaboration Operations.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity.
Additional Information :
- The candidate should have a minimum of 3 years of experience in Unified Communication and Collaboration Operations.
- This position is based at our Bengaluru office.
- A 15 years full-time education is required.
Posted 4 days ago
5.0 - 10.0 years
10 - 14 Lacs
Mumbai
Work from Office
Project Role : Application Lead
Project Role Description : Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills : Spring Boot, Japanese Language
Good to have skills : Spring Application Framework
Minimum experience required : 5 year(s)
Educational Qualification : 15 years full time education
Summary : As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will be responsible for managing the team and ensuring successful project delivery. Your typical day will involve collaborating with multiple teams, making key decisions, and providing solutions to problems for your immediate team and across multiple teams.
Roles & Responsibilities :
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Lead the effort to design, build, and configure applications.
- Act as the primary point of contact.
- Manage the team and ensure successful project delivery.
Professional & Technical Skills :
- Must To Have Skills : Proficiency in Spring Boot and Japanese.
- Good To Have Skills : Experience with the Spring Application Framework.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity.
Additional Information :
- The candidate should have a minimum of 5 years of experience in Spring Boot.
- This position is based in Mumbai, with location flexibility for cross-location resources.
- A 15 years full-time education is required.
Posted 4 days ago
7.0 - 12.0 years
10 - 14 Lacs
Mumbai
Work from Office
Project Role : Application Lead
Project Role Description : Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills : Spring Boot, Japanese Language
Good to have skills : Spring Application Framework
Minimum experience required : 7.5 year(s)
Educational Qualification : 15 years full time education
Summary : As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will be responsible for managing the team and ensuring successful project delivery. Your typical day will involve collaborating with multiple teams, making key decisions, and providing solutions to problems for your immediate team and across multiple teams.
Roles & Responsibilities :
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Lead the effort to design, build, and configure applications.
- Act as the primary point of contact.
- Manage the team and ensure successful project delivery.
Professional & Technical Skills :
- Must To Have Skills : Proficiency in Spring Boot and Japanese.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity.
Additional Information :
- The candidate should have a minimum of 7.5 years of experience in Spring Boot.
- This position is based in Mumbai, with location flexibility for cross-location resources.
- A 15 years full-time education is required.
Posted 4 days ago