1.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Exciting Opportunity at Eloelo: Join the Future of Live Streaming and Social Gaming! Are you ready to be a part of the dynamic world of live streaming and social gaming? Look no further! Eloelo, an innovative Indian platform founded in February 2020 by ex-Flipkart executives Akshay Dubey and Saurabh Pandey, is on the lookout for passionate individuals to join our growing team in Bangalore. About Us: Eloelo stands at the forefront of multi-host video and audio rooms, offering a unique blend of interactive experiences, including chat rooms, PK challenges, audio rooms, and captivating live games like Lucky 7, Tambola, Tol Mol Ke Bol, and Chidiya Udd. Our platform has successfully attracted audiences from all corners of India, providing a space for social connections and immersive gaming. Recent Milestone: Eloelo reached a significant milestone in October 2023, raising $22Mn from a diverse group of investors, including Lumikai, Waterbridge Capital, Courtside Ventures, Griffin Gaming Partners, and other esteemed new and existing contributors. Why Eloelo? Be a part of a team that thrives on creativity and innovation in the live streaming and social gaming space. Rub shoulders with the stars: Eloelo regularly hosts celebrities such as Akash Chopra, Kartik Aryan, Rahul Dua, Urfi Javed, and Kiku Sharda of The Kapil Sharma Show. Work with a world-class, high-performance team that constantly pushes boundaries and redefines what is possible, and enjoy an amazing work culture with flexible timings and a vibrant atmosphere where fun and work go hand in hand. We are looking to hire a business analyst to join our growth analytics team. This role sits at the intersection of business strategy, marketing performance, creative experimentation, and customer lifecycle management, with a growing focus on AI-led insights. You'll drive actionable insights to guide our performance marketing, creative strategy, and lifecycle interventions, while also building scalable analytics foundations for a fast-moving growth team. We're looking for: 1 to 3 years of experience in business/marketing analytics or growth-focused analytics roles; a strong grasp of marketing funnel metrics, CAC, ROAS, LTV, retention, and other growth KPIs; SQL mastery, with 1+ years of experience writing and optimizing complex SQL queries over large datasets (BigQuery/Redshift/Snowflake); experience in campaign performance analytics across Meta, Google, affiliates, etc.; comfort working with creative performance data (e.g., A/B testing, video/image-led analysis); and experience with CLM campaign analysis via tools like MoEngage and Firebase.
Ability to work with large datasets, break down complex problems, and derive actionable insights Hands-on experience or strong interest in applying AI/ML for automation, personalization, or insight generation is a plus Good business judgment and a strong communication style that bridges data and decision-making Comfort juggling short-term tactical asks and long-term strategic workstreams Experience in a fast-paced consumer tech or startup environment preferred You will Own reporting, insights, and experimentation across performance marketing, creative testing, and CLM Partner with growth, product, and content teams to inform campaign decisions, budget allocation, and targeting strategy Build scalable dashboards and measurement frameworks for marketing and business KPIs Drive insights into user behavior and campaign effectiveness by leveraging cohorting, segmentation, and funnel analytics Evaluate and experiment with AI tools or models to automate insights, build scoring systems, or improve targeting/personalization Be the go-to person for identifying growth levers, inefficiencies, or new opportunities across user acquisition and retention Bonus Points Experience working with marketing attribution tools (Appsflyer, Adjust etc.) Hands-on experience with Python/R for advanced analysis or automation Exposure to AI tools for marketing analytics (e.g., creative scoring, automated clustering, LLMs for insights) Past experience working in analytics for a D2C, gaming, or consumer internet company You’ve built marketing mix models or predictive LTV models
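As an illustration of the SQL depth this role expects, here is a minimal, hedged sketch of a daily CAC/ROAS rollup on BigQuery; the project, dataset, table, and column names are hypothetical stand-ins, not Eloelo's actual schema.

```python
# Illustrative only: a daily CAC/ROAS rollup of the kind a growth analyst owns.
# All project/dataset/table/column names below are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # assumes default credentials

QUERY = """
WITH spend AS (
  SELECT campaign_id, DATE(spend_date) AS day, SUM(spend) AS spend
  FROM `my-project.growth.campaign_spend` GROUP BY 1, 2
),
installs AS (
  SELECT campaign_id, DATE(install_time) AS day,
         COUNT(DISTINCT user_id) AS installs
  FROM `my-project.growth.installs` GROUP BY 1, 2
),
revenue AS (
  SELECT i.campaign_id, DATE(i.install_time) AS day, SUM(r.revenue) AS revenue
  FROM `my-project.growth.installs` i
  JOIN `my-project.growth.revenue` r USING (user_id)
  GROUP BY 1, 2
)
SELECT s.campaign_id, s.day, s.spend, i.installs, r.revenue,
       SAFE_DIVIDE(s.spend, i.installs) AS cac,   -- cost per acquired user
       SAFE_DIVIDE(r.revenue, s.spend) AS roas    -- return on ad spend
FROM spend s
LEFT JOIN installs i USING (campaign_id, day)
LEFT JOIN revenue r USING (campaign_id, day)
ORDER BY s.day DESC
"""

df = client.query(QUERY).to_dataframe()  # needs pandas + db-dtypes installed
print(df.head())
```

Pre-aggregating in CTEs before joining avoids the fan-out that would otherwise inflate spend totals — a common pitfall in funnel queries over large datasets.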
Posted 3 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Exela Exela Technologies, Inc. (“Exela”) is a location-agnostic, global business process automation ("BPA") leader combining industry-specific and industry-agnostic enterprise software and solutions with decades of experience. Our BPA suite of solutions is deployed in banking, healthcare, insurance and other industries to support mission-critical environments. Exela is a leader in workflow automation, attended and unattended cognitive automation, digital mail rooms, print communications, and payment processing with deployments across the globe. Exela partners with customers to improve user experience and quality through operational efficiency. Exela serves over 3,700 customers across more than 50 countries, through a secure, cloud-enabled global delivery model. We are 22,000 employees strong across the Americas, Europe and Asia. Our customer list includes 60% of the Fortune® 100, along with many of the world’s largest retail chains, banks, law firms, healthcare insurance payers and providers and telecom companies. Why Exela? A global, public company (Nasdaq: XELA), the people behind Exela are as important as the company itself. Our team's extensive experience across multiple industry verticals gives us a better sense of our clients' needs. That begins with teams comprised of individuals from diverse backgrounds with different perspectives. Join our global team as we create advancements in business process automation solutions that impact our clients’ mission-critical operations across the industries they serve. The diversity of our workforce and their inspiring ideas resonate throughout all that we do – don’t just read about digital innovation, be part of the revolution! We are seeking a highly skilled and experienced data engineer to join our dynamic team. In this role, you will play a crucial part in designing, implementing, and supporting data solutions for our internal and external clients. Designation: Data Engineer Experience: 5+ years. Desired Skills and Experience: Technical Expertise Proficiency in Python and Java for data engineering tasks. Hands-on experience with Apache Airflow. Hands-on experience with data processing libraries and frameworks (e.g., Pandas, Apache Spark). Strong understanding of SQL and database systems (PostgreSQL, MySQL, MongoDB, etc.). Experience with cloud data platforms like AWS (S3, Redshift), GCP (BigQuery), or Azure Data Lake. Problem-Solving and Analytical Thinking Ability to troubleshoot complex data issues and provide effective solutions. Strong analytical skills to interpret data and derive actionable insights. Additional Skills Knowledge of containerization tools like Docker and orchestration tools like Kubernetes is a plus. Familiarity with CI/CD pipelines and version control systems like Git. Understanding of data governance, security, and compliance requirements.
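For a sense of the Airflow-plus-Pandas skill set listed above, here is a hedged sketch of a small daily ETL DAG; the paths, schema, and cleansing rules are illustrative assumptions (and reading s3:// paths with pandas requires the s3fs package).

```python
# A minimal sketch of an Airflow 2.x daily ETL pipeline; not a real Exela job.
# Bucket names, columns, and transform logic are assumptions.
from datetime import datetime

import pandas as pd
from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_transform_load():
    # Extract: read a raw CSV drop (path is hypothetical; needs s3fs)
    df = pd.read_csv("s3://raw-bucket/orders/orders.csv")
    # Transform: basic cleansing with pandas
    df = df.dropna(subset=["order_id"]).drop_duplicates("order_id")
    df["order_date"] = pd.to_datetime(df["order_date"])
    # Load: write a curated Parquet file for downstream consumers
    df.to_parquet("s3://curated-bucket/orders/orders.parquet", index=False)


with DAG(
    dag_id="orders_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+ keyword; older versions use schedule_interval
    catchup=False,
) as dag:
    PythonOperator(task_id="etl", python_callable=extract_transform_load)
```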
Posted 3 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description Analytics Engineer We are seeking a talented, motivated and self-driven professional to join the HH Digital, Data & Analytics (HHDDA) organization and play an active role in the Human Health transformation journey to become the premier “Data First” commercial biopharma organization. As an Analytics Engineer, you will be part of the HHDDA Commercial Data Solutions team, providing technical and data expertise in the development of analytical data products that enable data science & analytics use cases. In this role, you will create and maintain data assets/domains used in the commercial/marketing analytics space – to develop best-in-class data pipelines and products, working closely with data product owners to translate data product requirements and user stories into development activities throughout all phases of design, planning, execution, testing, deployment and delivery. Your specific responsibilities will include Hands-on development of last-mile data products using the most up-to-date technologies and software / data / DevOps engineering practices Enable data science & analytics teams to drive data modeling and feature engineering activities aligned with business questions and utilizing datasets in an optimal way Develop deep domain expertise and business acumen to ensure that all specificities and pitfalls of data sources are accounted for Build data products based on automated data models, aligned with use case requirements, and advise data scientists, analysts and visualization developers on how to use these data models Develop analytical data products for reusability, governance and compliance by design Align with organization strategy and implement a semantic layer for analytics data products Support data stewards and other engineers in maintaining data catalogs, data quality measures and governance frameworks Education B.Tech / B.S., M.Tech / M.S. or PhD in Engineering, Computer Science, Pharmaceuticals, Healthcare, Data Science, Business, or related field Required Experience 5+ years of relevant work experience in the pharmaceutical/life sciences industry, with demonstrated hands-on experience in analyzing, modeling and extracting insights from commercial/marketing analytics datasets (specifically, real-world datasets) High proficiency in SQL, Python and AWS Good understanding and comprehension of the requirements provided by the Data Product Owner and Lead Analytics Engineer Experience creating / adopting data models to meet requirements from Marketing, Data Science, Visualization stakeholders Experience with feature engineering Experience with cloud-based (AWS / GCP / Azure) data management platforms and typical storage/compute services (Databricks, Snowflake, Redshift, etc.) Experience with modern data stack tools such as Matillion, Starburst, ThoughtSpot and low-code tools (e.g.
Dataiku) Excellent interpersonal and communication skills, with the ability to quickly establish productive working relationships with a variety of stakeholders Experience in analytics use cases of pharmaceutical products and vaccines Experience in market analytics and related use cases Preferred Experience Experience in analytics use cases focused on informing marketing strategies and commercial execution of pharmaceutical products and vaccines Experience with Agile ways of working, leading or working as part of scrum teams Certifications in AWS and/or modern data technologies Knowledge of the commercial/marketing analytics data landscape and key data sources/vendors Experience in building data models for data science and visualization/reporting products, in collaboration with data scientists, report developers and business stakeholders Experience with data visualization technologies (e.g., Power BI) Our Human Health Division maintains a “patient first, profits later” ideology. The organization comprises sales, marketing, market access, digital analytics and commercial professionals who are passionate about their role in bringing our medicines to our customers worldwide. We are proud to be a company that embraces the value of bringing diverse, talented, and committed people together. The fastest way to breakthrough innovation is when diverse ideas come together in an inclusive environment. We encourage our colleagues to respectfully challenge one another’s thinking and approach problems collectively. We are an equal opportunity employer, committed to fostering an inclusive and diverse workplace. Current Employees apply HERE Current Contingent Workers apply HERE Search Firm Representatives Please Read Carefully Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs / resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails. Employee Status: Regular; Relocation: —; VISA Sponsorship: —; Travel Requirements: —; Flexible Work Arrangements: Hybrid; Shift: —; Valid Driving License: —; Hazardous Material(s): — Required Skills: Business Intelligence (BI), Data Management, Data Modeling, Data Visualization, Measurement Analysis, Stakeholder Relationship Management, Waterfall Model Preferred Skills: — Job Posting End Date: 07/31/2025. A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date. Requisition ID: R335382
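To make the "last-mile data product" idea above concrete, here is a minimal sketch using the Snowflake Python connector; the account details, object names, and business logic are assumptions for illustration, not Merck's actual data model.

```python
# Hedged sketch: building a curated, analytics-ready table in Snowflake —
# the "last-mile data product" pattern the posting describes.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="...",
    warehouse="ANALYTICS_WH", database="COMMERCIAL", schema="CURATED",
)

CURATED_SQL = """
CREATE OR REPLACE TABLE CURATED.HCP_ENGAGEMENT AS
SELECT
    hcp_id,
    DATE_TRUNC('month', activity_date) AS activity_month,
    COUNT_IF(channel = 'EMAIL')        AS email_touches,
    COUNT_IF(channel = 'FIELD')        AS field_calls,
    COUNT(*)                           AS total_touches
FROM RAW.OMNICHANNEL_ACTIVITY
GROUP BY hcp_id, activity_month
"""

with conn.cursor() as cur:
    cur.execute(CURATED_SQL)  # downstream data science/BI reads this table
conn.close()
```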
Posted 3 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description Specialist - Data Visualization Our Human Health Digital, Data and Analytics (HHDDA) team is innovating how we understand our patients and their needs. Working cross-functionally, we are inventing new ways of communicating, measuring, and interacting with our customers and patients, leveraging digital, data and analytics. Are you passionate about helping people see and understand data? You will take part in an exciting journey to help transform our organization to be the premier data-driven company. As a Specialist in Data Visualization, you will be focused on designing and developing compelling data visualization solutions to enable actionable insights & facilitate intuitive information consumption for internal business stakeholders. The ideal candidate will demonstrate competency in building user-centric visuals & dashboards that empower stakeholders with data-driven insights & decision-making capability. Responsibilities Develop user-centric scalable visualization and analytical solutions, leveraging complex data sources to create intuitive and insightful dashboards. Apply best practices in visualization design to enhance user experience and business impact. Drive business engagement, collaborating with stakeholders to define key metrics, KPIs, and reporting needs. Facilitate workshops to develop user stories, wireframes, and interactive visualizations. Partner with data engineering, data science, and IT teams to develop scalable business-friendly reporting solutions. Ensure adherence to data governance, privacy, and security best practices. Identify opportunities for automation, streamlining manual reporting processes through modern visualization technologies and self-service analytics enablement. Provide thought leadership, driving knowledge-sharing within the Data & Analytics organization while staying ahead of industry trends to enhance visualization capabilities. Continuously innovate on visualization best practices and technologies by reviewing external resources and the marketplace. Ensure timely delivery of high-quality outputs, while creating and maintaining SOPs, KPI libraries, and other essential governance documents. Required Experience And Skills 5+ years of experience in business intelligence, insight generation, business analytics, data visualization, infographics, and interactive visual storytelling Hands-on expertise in BI and visualization tools such as Power BI, MicroStrategy, and ThoughtSpot Solid understanding of data engineering and modeling, including ETL workflows, Dataiku, Databricks, Informatica, and database technologies like Redshift and Snowflake, with programming skills in SQL and Python. Deep knowledge of pharma commercial data sources, including IQVIA, APLD, Claims, Payer, Salesforce, Financials, Veeva, Komodo, IPSOS, and other industry datasets to drive strategic insights. Experience in pharmaceutical commercial analytics, including Field Force Effectiveness, customer engagement, market performance assessment, as well as web, campaign, and digital engagement analytics. Strong problem-solving, communication, and project management skills, with the ability to translate complex data into actionable insights and navigate complex matrix environments efficiently. Strong product management mindset, ensuring analytical solutions are scalable, user-centric, and aligned with business needs, with experience in defining product roadmaps and managing solution lifecycles.
Expertise in agile ways of working, including Agile/Scrum methodologies, iterative development, and continuous improvement in data visualization and analytics solutions. Our Human Health Division maintains a “patient first, profits later” ideology. The organization comprises sales, marketing, market access, digital analytics and commercial professionals who are passionate about their role in bringing our medicines to our customers worldwide. We are proud to be a company that embraces the value of bringing diverse, talented, and committed people together. The fastest way to breakthrough innovation is when diverse ideas come together in an inclusive environment. We encourage our colleagues to respectfully challenge one another’s thinking and approach problems collectively. We are an equal opportunity employer, committed to fostering an inclusive and diverse workplace. Current Employees apply HERE Current Contingent Workers apply HERE Search Firm Representatives Please Read Carefully Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs / resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails. Employee Status: Regular; Relocation: —; VISA Sponsorship: —; Travel Requirements: —; Flexible Work Arrangements: Hybrid; Shift: —; Valid Driving License: —; Hazardous Material(s): — Required Skills: Business Intelligence (BI), Data Visualization, Requirements Management, User Experience (UX) Design Preferred Skills: — Job Posting End Date: 04/30/2025. A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date. Requisition ID: R335129
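As a small illustration of the SQL/Python shaping work that sits behind such dashboards, the sketch below prepares a tidy KPI extract for a BI tool; the connection URL, source table, and KPI definition are hypothetical (the Snowflake SQLAlchemy dialect assumed here requires the snowflake-sqlalchemy package).

```python
# Illustrative only: shaping a tidy, one-row-per-grain KPI extract that a
# Power BI model can consume. Table and KPI definitions are assumptions.
import pandas as pd
import sqlalchemy

engine = sqlalchemy.create_engine("snowflake://user:pass@account/db/schema")

sales = pd.read_sql(
    "SELECT territory, product, month, trx, nrx FROM curated.sales_monthly",
    engine,
)
# One row per territory/product/month, with a share-style KPI precomputed
sales["nrx_share"] = (
    sales["nrx"] / sales.groupby(["month", "product"])["nrx"].transform("sum")
)
sales.to_csv("kpi_extract.csv", index=False)  # picked up by the BI layer
```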
Posted 3 days ago
4.0 - 8.0 years
0 Lacs
Navi Mumbai, Maharashtra, India
Remote
Title – Product Manager Experience: 4 to 8 years Skills required: Business Analysis; experience managing products built on AWS services such as S3, API Gateway, Lambda, DynamoDB, and Redshift; Stakeholder Management; Backlog Prioritization Job Description The Area: Data Lake is a smart object store on AWS for storing and accessing data. The files in Data Lake are stored in raw or unstructured format, as compared to a structured DB, and are accessible to run a variety of analytics as needed on the available data. As a roadmap, Data Lake would be used across Morningstar to store and access all their structured and unstructured data across the teams to make it a single source of information. The Role: We are looking for an experienced, enthusiastic, results-driven individual to help advance our offerings to Morningstar's internal users. The ideal candidate will deeply understand the financial markets and financial data. The candidate should have worked extensively on developing new products and digital propositions from concept through to launch. This business visionary will work with internal partners in Product, Research, and Investment Management to drive innovation in our product offerings. This position is based in our Mumbai office. Responsibilities: Work within an Agile software development framework; develop business requirements and user stories refined and validated with customers and stakeholders; prioritize the backlog queue across multiple projects and workstreams; and ensure high-quality execution working with development and business analyst squad members. Work with external and internal project stakeholders to define and document project scope, plan product phases/versions, Minimum Viable Product, and overall product deliveries. Work with other product and capability owners from across the organization to develop a product integration vision that supports and advances their business goals. Work with cross-functional leaders to determine technology, design, and project management resource requirements to execute and deliver on commitments. Proactively communicate project delivery risks to key stakeholders to ensure timely deliverables. Own the tactical roadmap, requirements, and product development lifecycle for a squad to deliver high-performing Enterprise Components to our end clients. Understand business, operations, and technology requirements, serving as a conduit between stakeholders, operations, and technology teams. Define and track key performance indicators (KPIs) and measurements of product success. Requirements: Candidates must have a minimum of a bachelor's degree with excellent academic credentials. MBA highly desired. At least five years of business experience in the financial services industry. Candidates must have domain expertise, particularly in developing products using the AWS platform. Superior business judgment; analytical, planning, and decision-making skills; and exemplary communication and presentation abilities. An action-oriented individual possessing an entrepreneurial mindset. Demonstrated ability to lead and build the capabilities of a driven and diverse team. Able to thrive in a fast-paced work environment, exhibit a passion for innovation, and harbor a genuine belief in, and acceptance of, Morningstar's core values. Ability to develop strong internal and external partnerships; and work effectively across different business and functional areas. AWS Certification is a big plus. Morningstar is an equal-opportunity employer.
Morningstar’s hybrid work environment gives you the opportunity to work remotely and collaborate in-person each week. We’ve found that we’re at our best when we’re purposely together on a regular basis, at least three days each week. A range of other benefits are also available to enhance flexibility as needs change. No matter where you are, you’ll have tools and resources to engage meaningfully with your global colleagues. Legal Entity: Morningstar India Private Ltd. (Delhi)
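To illustrate the Data Lake pattern described above (raw files land in S3, analytics run on demand), here is a hedged sketch using boto3 and Athena; the database, query, and bucket names are placeholders, not Morningstar's environment.

```python
# Minimal sketch of querying raw S3 data in place with Athena. Names are
# hypothetical; assumes AWS credentials are configured in the environment.
import time

import boto3

athena = boto3.client("athena", region_name="us-east-1")

resp = athena.start_query_execution(
    QueryString="SELECT symbol, AVG(price) FROM holdings GROUP BY symbol",
    QueryExecutionContext={"Database": "datalake_raw"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
qid = resp["QueryExecutionId"]

# Poll until the query finishes, then fetch the first rows
while True:
    status = athena.get_query_execution(QueryExecutionId=qid)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    print(rows[:5])
```

This is what makes the lake a "single source of information": data stays in raw form, and each team brings its own query engine to it.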
Posted 3 days ago
5.0 years
0 Lacs
Salem, Tamil Nadu, India
On-site
Description / Position Overview This is a key position for our client to help create data-driven technology solutions that will position us as the industry leader in healthcare, financial, and clinical administration. This hands-on Lead Data Scientist role will focus on building and implementing machine learning models and predictive analytics solutions that will drive the new wave of AI-powered innovation in healthcare. You will be the lead data science technologist responsible for developing and implementing a multitude of ML/AI products from concept to production, helping us gain a competitive advantage in the market. Alongside our Director of Data Science, you will work at the intersection of healthcare, finance, and cutting-edge data science to solve some of the industry's most complex challenges. This is a greenfield opportunity within VHT’s Product Transformation division, where you'll build groundbreaking machine learning capabilities from the ground up. You'll have the chance to shape the future of VHT’s data science & analytics foundation while working with cutting-edge tools and methodologies in a collaborative, innovation-driven environment. Key Responsibilities As the Lead Data Scientist, your role will require you to work closely with subject matter experts in clinical and financial administration across practices, health systems, hospitals, and payors. Your machine learning projects will span the entire healthcare revenue cycle - from clinical encounters through financial transaction completion, extending into back-office operations and payer interactions. You will lead the development of predictive machine learning models for Revenue Cycle Management analytics, along the lines of: • Payer Propensity Modeling - predicting payer behavior and reimbursement likelihood • Claim Denials Prediction - identifying high-risk claims before submission • Payment Amount Prediction - forecasting expected reimbursement amounts • Cash Flow Forecasting - predicting revenue timing and patterns • Patient-Related Models - enhancing patient financial experience and outcomes • Claim Processing Time Prediction - optimizing workflow and resource allocation Additionally, we will work on emerging areas and integration opportunities—for example, denial prediction + appeal success probability or prior authorization prediction + approval likelihood models. You will reimagine how providers, patients, and payors interact within the healthcare ecosystem through intelligent automation and predictive insights, ensuring that providers can focus on delivering the highest quality patient care. VHT Technical Environment • Cloud Platform : AWS (SageMaker, S3, Redshift, EC2) • Development Tools : Jupyter Notebooks, Git, Docker • Programming : Python, SQL, R (optional) • ML/AI Stack : Scikit-learn, TensorFlow/PyTorch, MLflow, Airflow • Data Processing : Spark, Pandas, NumPy • Visualization : Matplotlib, Seaborn, Plotly, Tableau Required Qualifications • Advanced degree in Data Science, Statistics, Computer Science, Mathematics, or a related quantitative field • 5+ years of hands-on data science experience with a proven track record of deploying ML models to production • Expert-level proficiency in SQL and Python , with extensive experience using standard Python machine learning libraries (scikit-learn, pandas, numpy, matplotlib, seaborn, etc.) 
• Cloud platform experience, preferably AWS, with hands-on knowledge of SageMaker, S3, Redshift, and Jupyter Notebook workbenches (other cloud environments acceptable) • Strong statistical modeling and machine learning expertise across supervised and unsupervised learning techniques • Experience with model deployment, monitoring, and MLOps practices • Excellent communication skills with the ability to translate complex technical concepts to non-technical stakeholders Preferred Qualifications • US Healthcare industry experience , particularly in Health Insurance and/or Medical Revenue Cycle Management • Experience with healthcare data standards (HL7, FHIR, X12 EDI) • Knowledge of healthcare regulations (HIPAA, compliance requirements) • Experience with deep learning frameworks (TensorFlow, PyTorch) • Familiarity with real-time streaming data processing • Previous leadership or mentoring experience
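As a hedged sketch of one model from the list above — claim denial prediction — the example below uses the stack the posting names (scikit-learn, pandas, NumPy). The features and labels are synthetic stand-ins, not VHT data; a production model would use rigorously engineered RCM features.

```python
# Illustrative claim-denial classifier on synthetic data; not VHT's model.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5_000
claims = pd.DataFrame({
    "claim_amount": rng.lognormal(6, 1, n),
    "days_to_submission": rng.integers(0, 60, n),
    "payer_denial_rate": rng.uniform(0.02, 0.30, n),  # historical payer behavior
    "is_resubmission": rng.integers(0, 2, n),
})
# Synthetic label loosely tied to the features above
p = 0.1 + 0.5 * claims["payer_denial_rate"] + 0.002 * claims["days_to_submission"]
claims["denied"] = rng.binomial(1, p.clip(0, 1))

X_train, X_test, y_train, y_test = train_test_split(
    claims.drop(columns="denied"), claims["denied"], test_size=0.2, random_state=0
)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

Scoring claims before submission this way is what lets a revenue-cycle team route high-risk claims to review instead of discovering denials after the fact.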
Posted 3 days ago
2.0 - 6.0 years
0 Lacs
Pune, Maharashtra
On-site
You should have at least 2 years of professional work experience in implementing data pipelines using Databricks and data lakes. A minimum of 3 years of hands-on programming experience in Python within a cloud environment (preferably AWS) is necessary for this role. Having 2 years of professional work experience with real-time streaming systems such as Event Grid and Event topics would be highly advantageous. You must possess expert-level knowledge of SQL to write complex, highly optimized queries for processing large volumes of data effectively. Experience in developing conceptual, logical, and/or physical database designs using tools like ErWin, Visio, or Enterprise Architect is expected. A minimum of 2 years of hands-on experience working with databases like Snowflake, Redshift, Synapse, Oracle, SQL Server, Teradata, Netezza, Hadoop, MongoDB, or Cassandra is required. Knowledge or experience in architectural best practices for building data lakes is a must for this position. Strong problem-solving and troubleshooting skills are necessary, along with the ability to make sound judgments independently. You should be capable of working independently and providing guidance to junior data engineers. If you meet the above requirements and are ready to take on this challenging role, we look forward to your application. Warm Regards, Rinka Bose Talent Acquisition Executive Nivasoft India Pvt. Ltd. Mobile: +91-9632249758 (INDIA) | 732-334-3491 (U.S.A) Email: rinka.bose@nivasoft.com | Web: https://nivasoft.com/
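To illustrate the Databricks/data-lake pipeline experience described above, here is a minimal PySpark sketch of a batch cleanup job; the paths, schema, and deduplication rule are assumptions for illustration.

```python
# Minimal sketch of a Databricks-style batch pipeline in PySpark; paths and
# the input schema (event_id, ingested_at, event_date) are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("events_to_lake").getOrCreate()

events = spark.read.json("s3://raw-zone/events/")  # raw, semi-structured drop

# Keep only the latest record per event_id — a common cleanup step
w = Window.partitionBy("event_id").orderBy(F.col("ingested_at").desc())
latest = (
    events.withColumn("rn", F.row_number().over(w))
    .filter("rn = 1")
    .drop("rn")
)

# Write the curated zone, partitioned for efficient downstream scans
(latest.write.mode("overwrite")
 .partitionBy("event_date")
 .parquet("s3://curated-zone/events/"))
```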
Posted 3 days ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Zenoti provides an all-in-one, cloud-based software solution for the beauty and wellness industry. Our solution allows users to seamlessly manage every aspect of the business in a comprehensive mobile solution: online appointment bookings, POS, CRM, employee management, inventory management, built-in marketing programs and more. Zenoti helps clients streamline their systems and reduce costs, while simultaneously improving customer retention and spending. Our platform is engineered for reliability and scale and harnesses the power of enterprise-level technology for businesses of all sizes. Zenoti powers more than 30,000 salons, spas, medspas and fitness studios in over 50 countries. This includes a vast portfolio of global brands, such as European Wax Center, Hand & Stone, Massage Heights, Rush Hair & Beauty, Sono Bello, Profile by Sanford, Hair Cuttery, CorePower Yoga and TONI&GUY. Our recent accomplishments include surpassing a $1 billion unicorn valuation, being named Next Tech Titan by GeekWire, raising an $80 million investment from TPG, and ranking as the 316th fastest-growing company in North America on Deloitte’s 2020 Technology Fast 500™. We are also proud to be recognized as a Great Place to Work Certified™ for 2021-2022, as this reaffirms our commitment to empowering people to feel good and find their greatness. To learn more about Zenoti visit: https://www.zenoti.com What will I be doing? Design, architect, develop and maintain components of Zenoti. Collaborate with a team of product managers, developers, and quality assurance engineers to define, design and deploy new features and functionality. Build software that ensures the best possible usability, performance, quality and responsiveness of features. Work in a team following agile development practices (SCRUM). Learn to scale your features to handle 2x to 4x growth every year and manage code that has to deal with millions of records and terabytes of data. Release new features into production every month and get real feedback from thousands of customers to refine your designs. Be proud of what you work on, and obsess about the quality of your work. Join our team to do the best work of your career. What skills do I need?
6+ years' experience developing ETL solutions and data pipelines with expertise in processing trillions of records efficiently 6+ years' experience with SQL Server, T-SQL, stored procedures, and deep understanding of SQL performance tuning for large-scale data processing Strong understanding of ETL concepts, data modeling, and data warehousing principles with hands-on experience building data pipelines using Python Extensive experience with Big Data platforms including Azure Fabric, Azure Databricks, Azure Data Factory (ADF), Amazon Redshift, Apache Spark, and Delta Lake Expert-level SQL skills for complex data transformations, aggregations, and query optimization to handle trillions of records with optimal performance Hands-on experience creating data lakehouse architectures and implementing data governance and security best practices across Big Data platforms Strong logical, analytical, and problem-solving skills with ability to design and optimize distributed computing clusters for maximum throughput Excellent communication skills for cross-functional collaboration and ability to work in a fast-paced environment with changing priorities Experience with cloud-native data solutions including Azure Data Lake, Azure Synapse, and containerization technologies (Docker, Kubernetes) Proven track record of implementing CI/CD pipelines for data engineering workflows, automating data pipeline deployment, and monitoring performance at scale Benefits Attractive Compensation Comprehensive medical coverage for yourself and your immediate family An environment where wellbeing is high on priority – access to regular yoga, meditation, breathwork, nutrition counseling, stress management, inclusion of family for most benefit awareness building sessions Opportunities to be a part of a community and give back: Social activities are part of our culture; You can look forward to regular engagement, social work, community give-back initiatives Zenoti provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training.
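As one hedged illustration of keeping SQL Server processing tractable at the scale described above, the sketch below shows a watermark-driven incremental load in bounded batches via pyodbc; the connection string, tables, and batch size are hypothetical.

```python
# Hedged sketch of an incremental, watermark-driven T-SQL load — a standard
# pattern for very large tables. All object names are assumptions.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=myserver;"
    "DATABASE=analytics;UID=etl_user;PWD=...;Encrypt=yes"
)
cur = conn.cursor()

# Read the last processed watermark for this table
cur.execute("SELECT last_loaded_at FROM etl.watermarks WHERE table_name = ?",
            "fact_appointments")
watermark = cur.fetchone()[0]

# Move only rows newer than the watermark, in a bounded batch; a real job
# would loop, then advance the watermark inside the same transaction.
cur.execute(
    """
    INSERT INTO dw.fact_appointments (appointment_id, center_id, amount, updated_at)
    SELECT TOP (50000) appointment_id, center_id, amount, updated_at
    FROM staging.appointments
    WHERE updated_at > ?
    ORDER BY updated_at
    """,
    watermark,
)
conn.commit()
```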
Posted 3 days ago
4.0 - 8.0 years
0 Lacs
Rajasthan
On-site
You will be working as a Snowflake Database Administrator at the Mid Level, providing database and application administration and support for the Information Management Analytical Service. This role involves managing data integration, data warehousing, and business intelligence, including enterprise reporting, predictive analytics, data mining, and self-service solutions. You will collaborate with different teams to offer database and application administration, job scheduling/execution, and code deployment support. Your key responsibilities will include providing database support for Big Data tools, performing maintenance tasks, performance tuning, monitoring, developer support, and administrative support for the application toolset. You will participate in a 24/7 on-call rotation for enterprise job scheduler activities, follow ITIL processes, create/update technical documentation, install/upgrade/configure the application toolset, and ensure regular attendance. To qualify for this role, you are required to have a Bachelor's degree or equivalent experience, along with 5 years of work experience in IT. You should have experience in Cloud Database Administration, installing/configuring commercial applications at the OS level, and effective collaboration in a team environment. Preferred skills include scripting in Linux and Windows, experience with Terraform, and knowledge of the insurance and/or reinsurance industry. In terms of technical requirements, you should be proficient in databases such as Snowflake, Vertica, Impala, PostgreSQL, Oracle, SQL Server, operating systems like Unix, Linux, CentOS, Windows, and reporting tools including SAP Business Objects, Tableau, and Power BI. This position falls under SOW#23 - Snowflake DBA and requires a minimum of 4 years to a maximum of 5 years of experience. Thank you for considering this opportunity.
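To give a flavor of the tuning and monitoring duties listed above, here is a small, hedged sketch using the Snowflake Python connector; the warehouse name and account details are placeholders.

```python
# Illustrative DBA tasks via the Snowflake connector: right-size a warehouse
# for a batch window and surface slow queries. Names are assumptions.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="dba_user", password="..."
)

with conn.cursor() as cur:
    # Performance tuning: scale a warehouse up ahead of a heavy batch window
    cur.execute("ALTER WAREHOUSE etl_wh SET warehouse_size = 'LARGE'")

    # Monitoring: the slowest queries of the last 24 hours
    cur.execute("""
        SELECT query_id, user_name, total_elapsed_time / 1000 AS seconds
        FROM snowflake.account_usage.query_history
        WHERE start_time > DATEADD('hour', -24, CURRENT_TIMESTAMP())
        ORDER BY total_elapsed_time DESC
        LIMIT 10
    """)
    for row in cur.fetchall():
        print(row)
```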
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
Haryana
On-site
Genpact is a global professional services and solutions firm with a workforce of over 125,000 individuals in more than 30 countries. We are characterized by our innate curiosity, entrepreneurial agility, and commitment to creating enduring value for our clients. Our purpose, the relentless pursuit of a world that works better for people, drives us to serve and transform leading enterprises globally, including the Fortune Global 500. We leverage our profound business and industry knowledge, digital operations services, and expertise in data, technology, and AI to deliver impactful outcomes. We are currently seeking applications for the position of Senior Principal Consultant - QA Engineer! Responsibilities: - Develop comprehensive test plans, test cases, and test scenarios based on functional and non-functional requirements. - Manage the test case life cycle efficiently. - Execute and analyze manual and automated tests to identify defects and ensure the quality of software applications. - Collaborate closely with development teams to align test cases with development goals and timelines. - Work with cross-functional teams to ensure adequate testing coverage and effective communication of test results. Moreover, the ideal candidate should possess the ability to manage repeatable standard processes while also demonstrating proficiency in identifying and resolving ad-hoc issues. Qualifications we seek in you! Minimum Qualifications: - Proficiency in SQL, ETL Testing, and writing testing scripts in Python to validate functionality, create automation frameworks, and ensure the performance and reliability of data systems. - In-depth understanding of the data domain, encompassing data processing, storage, and retrieval. - Strong collaboration, communication, and analytical skills. - Experience in reviewing system requirements and tracking quality assurance metrics such as defect densities and open defect counts. - Experience in creating and enhancing the integration of CI/CD pipelines. - Familiarity with Agile/Scrum development processes. - Some exposure to performance and security testing. - Hands-on experience in test execution using AWS services, particularly proficient in services like MSK, EKS, Redshift, and S3. If you are passionate about quality assurance engineering and possess the required qualifications, we invite you to apply for this exciting opportunity! Job Details: - Job Title: Senior Principal Consultant - Location: India-Gurugram - Schedule: Full-time - Education Level: Bachelor's / Graduation / Equivalent - Job Posting Date: Sep 18, 2024, 4:28:53 AM - Unposting Date: Oct 18, 2024, 1:29:00 PM - Master Skills List: Digital - Job Category: Full Time
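As a concrete, hedged example of the ETL testing described above, the pytest sketch below reconciles a Redshift target against its source; the hosts, credentials, and manifest-based source count are assumptions, not Genpact's setup.

```python
# Hedged sketch of an ETL reconciliation test. Connection details, table
# names, and the manifest convention are placeholders.
import boto3
import psycopg2


def redshift_count(table: str) -> int:
    conn = psycopg2.connect(host="redshift-host", dbname="dw",
                            user="qa_user", password="...", port=5439)
    with conn, conn.cursor() as cur:
        cur.execute(f"SELECT COUNT(*) FROM {table}")  # table name is a constant
        return cur.fetchone()[0]


def source_count() -> int:
    # Stand-in: assumes the upstream export writes a row count into the
    # manifest object's metadata; a real suite might query Athena instead.
    s3 = boto3.client("s3")
    head = s3.head_object(Bucket="source-bucket",
                          Key="exports/orders/manifest.json")
    return int(head["Metadata"]["row-count"])


def test_orders_row_counts_match():
    assert redshift_count("public.orders") == source_count()
```

Run with `pytest` in CI so every pipeline deployment gets the same regression gate.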
Posted 3 days ago
9.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description Tech Lead – Azure/Snowflake & AWS Migration Key Responsibilities Design and develop scalable data pipelines using Snowflake as the primary data platform, integrating with tools like Azure Data Factory, Synapse Analytics, and AWS services. Build robust, efficient SQL and Python-based data transformations for cleansing, enrichment, and integration of large-scale datasets. Lead migration initiatives from AWS-based data platforms to a Snowflake-centered architecture, including: Rebuilding AWS Glue pipelines in Azure Data Factory or using Snowflake-native ELT approaches. Migrating EMR Spark jobs to Snowflake SQL or Python-based pipelines. Migrating Redshift workloads to Snowflake with schema conversion and performance optimization. Transitioning S3-based data lakes (Hudi, Hive) to Snowflake external tables via ADLS Gen2 or Azure Blob Storage. Redirecting Kinesis/MSK streaming data to Azure Event Hubs, followed by ingestion into Snowflake using Streams & Tasks or Snowpipe. Support database migrations from AWS RDS (Aurora PostgreSQL, MySQL, Oracle) to Snowflake, focusing on schema translation, compatibility handling, and data movement at scale. Design modern Snowflake lakehouse-style architectures that incorporate raw, staging, and curated zones, with support for time travel, cloning, zero-copy restore, and data sharing. Integrate Azure Functions or Logic Apps with Snowflake for orchestration and trigger-based automation. Implement security best practices, including Azure Key Vault integration and Snowflake role-based access control, data masking, and network policies. Optimize Snowflake performance and costs using clustering, multi-cluster warehouses, materialized views, and result caching. Support CI/CD processes for Snowflake pipelines using Git, Azure DevOps or GitHub Actions, and SQL code versioning. Maintain well-documented data engineering workflows, architecture diagrams, and technical documentation to support collaboration and long-term platform maintainability. Required Qualifications 9+ years of data engineering experience, with 3+ years on Microsoft Azure stack and hands-on Snowflake expertise. Proficiency in: Python for scripting and ETL orchestration SQL for complex data transformation and performance tuning in Snowflake Azure Data Factory and Synapse Analytics (SQL Pools) Experience in migrating workloads from AWS to Azure/Snowflake, including services such as Glue, EMR, Redshift, Lambda, Kinesis, S3, and MSK. Strong understanding of cloud architecture and hybrid data environments across AWS and Azure. Hands-on experience with database migration, schema conversion, and tuning in PostgreSQL, MySQL, and Oracle RDS. Familiarity with Azure Event Hubs, Logic Apps, and Key Vault. Working knowledge of CI/CD, version control (Git), and DevOps principles applied to data engineering workloads. Preferred Qualifications Extensive experience with Snowflake Streams, Tasks, Snowpipe, external tables, and data sharing. Exposure to MSK-to-Event Hubs migration and streaming data integration into Snowflake. Familiarity with Terraform or ARM templates for Infrastructure-as-Code (IaC) in Azure environments. Certification such as SnowPro Core, Azure Data Engineer Associate, or equivalent. Skills: Azure, AWS Redshift, Athena, Azure Data Lake
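To make the Streams & Tasks ingestion pattern named above concrete, here is a minimal sketch issued through the Snowflake Python connector: a stream tracks new rows landing in staging, and a scheduled task merges them into the curated layer. The table, stream, task, and warehouse names are illustrative.

```python
# Hedged sketch of Snowflake Streams & Tasks CDC ingestion. Object names
# are assumptions, not the actual migration target.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="..."
)

STATEMENTS = [
    # Track changes arriving in the staging table
    "CREATE OR REPLACE STREAM staging.orders_stream ON TABLE staging.orders",
    # Merge new/changed rows into curated every 5 minutes, only when data exists
    """
    CREATE OR REPLACE TASK curated.merge_orders
      WAREHOUSE = etl_wh
      SCHEDULE = '5 MINUTE'
    WHEN SYSTEM$STREAM_HAS_DATA('staging.orders_stream')
    AS
      MERGE INTO curated.orders t
      USING staging.orders_stream s ON t.order_id = s.order_id
      WHEN MATCHED THEN UPDATE SET t.amount = s.amount, t.updated_at = s.updated_at
      WHEN NOT MATCHED THEN INSERT (order_id, amount, updated_at)
                            VALUES (s.order_id, s.amount, s.updated_at)
    """,
    "ALTER TASK curated.merge_orders RESUME",  # tasks are created suspended
]

with conn.cursor() as cur:
    for stmt in STATEMENTS:
        cur.execute(stmt)
```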
Posted 3 days ago
9.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description Senior Data Engineer – Azure/Snowflake Migration Key Responsibilities Design and develop scalable data pipelines using Snowflake as the primary data platform, integrating with tools like Azure Data Factory, Synapse Analytics, and AWS services.
Build robust, efficient SQL and Python-based data transformations for cleansing, enrichment, and integration of large-scale datasets. Lead migration initiatives from AWS-based data platforms to a Snowflake-centered architecture, including: Rebuilding AWS Glue pipelines in Azure Data Factory or using Snowflake-native ELT approaches. Migrating EMR Spark jobs to Snowflake SQL or Python-based pipelines. Migrating Redshift workloads to Snowflake with schema conversion and performance optimization. Transitioning S3-based data lakes (Hudi, Hive) to Snowflake external tables via ADLS Gen2 or Azure Blob Storage. Redirecting Kinesis/MSK streaming data to Azure Event Hubs, followed by ingestion into Snowflake using Streams & Tasks or Snowpipe. Support database migrations from AWS RDS (Aurora PostgreSQL, MySQL, Oracle) to Snowflake, focusing on schema translation, compatibility handling, and data movement at scale. Design modern Snowflake lakehouse-style architectures that incorporate raw, staging, and curated zones, with support for time travel, cloning, zero-copy restore, and data sharing. Integrate Azure Functions or Logic Apps with Snowflake for orchestration and trigger-based automation. Implement security best practices, including Azure Key Vault integration and Snowflake role-based access control, data masking, and network policies. Optimize Snowflake performance and costs using clustering, multi-cluster warehouses, materialized views, and result caching. Support CI/CD processes for Snowflake pipelines using Git, Azure DevOps or GitHub Actions, and SQL code versioning. Maintain well-documented data engineering workflows, architecture diagrams, and technical documentation to support collaboration and long-term platform maintainability. Required Qualifications 7+ years of data engineering experience, with 3+ years on Microsoft Azure stack and hands-on Snowflake expertise. Proficiency in: Python for scripting and ETL orchestration SQL for complex data transformation and performance tuning in Snowflake Azure Data Factory and Synapse Analytics (SQL Pools) Experience in migrating workloads from AWS to Azure/Snowflake, including services such as Glue, EMR, Redshift, Lambda, Kinesis, S3, and MSK. Strong understanding of cloud architecture and hybrid data environments across AWS and Azure. Hands-on experience with database migration, schema conversion, and tuning in PostgreSQL, MySQL, and Oracle RDS. Familiarity with Azure Event Hubs, Logic Apps, and Key Vault. Working knowledge of CI/CD, version control (Git), and DevOps principles applied to data engineering workloads. Preferred Qualifications Extensive experience with Snowflake Streams, Tasks, Snowpipe, external tables, and data sharing. Exposure to MSK-to-Event Hubs migration and streaming data integration into Snowflake. Familiarity with Terraform or ARM templates for Infrastructure-as-Code (IaC) in Azure environments. Certification such as SnowPro Core, Azure Data Engineer Associate, or equivalent. Skills: AWS, Azure Data Lake, Python
Posted 3 days ago
0.0 - 6.0 years
15 - 18 Lacs
Indore, Madhya Pradesh
On-site
Location: Indore Experience: 6+ years Work Type: Hybrid Notice Period: 0-30 days We are hiring for a Digital Transformation Consulting firm that specializes in the advisory and implementation of AI, Automation, and Analytics strategies for healthcare providers. The company is headquartered in NJ, USA and its India office is in Indore, MP. Job Description: We are seeking a highly skilled Tech Lead with expertise in database management, data warehousing, and ETL pipelines to drive data initiatives at the company. The ideal candidate will lead a team of developers, architects, and data engineers to design, develop, and optimize data solutions. This role requires hands-on experience in database technologies, data modeling, ETL processes, and cloud-based data platforms. Key Responsibilities: Lead the design, development, and maintenance of scalable database, data warehouse, and ETL solutions. Define best practices for data architecture, modeling, and governance. Oversee data integration, transformation, and migration strategies. Ensure high availability, performance tuning, and optimization of databases and ETL pipelines. Implement data security, compliance, and backup strategies. Required Skills & Qualifications: 6+ years of experience in database and data engineering roles. Strong expertise in SQL, NoSQL, and relational database management systems (RDBMS). Hands-on experience with data warehousing technologies (e.g., Snowflake, Redshift, BigQuery). Deep understanding of ETL tools and frameworks (e.g., Apache Airflow, Talend, Informatica). Experience with cloud data platforms (AWS, Azure, GCP). Proficiency in programming/scripting languages (Python, SQL, Shell scripting). Strong problem-solving, leadership, and communication skills. Preferred Skills (Good to Have): Experience with big data technologies (Hadoop, Spark, Kafka). Knowledge of real-time data processing. Exposure to AI/ML technologies and working with ML algorithms. Job Types: Full-time, Permanent Pay: ₹1,500,000.00 - ₹1,800,000.00 per year Schedule: Day shift Application Question(s): We must fill this position urgently. Can you start immediately? Have you held a lead role in the past? Experience: Extract, Transform, Load (ETL): 6 years (Required) Python: 5 years (Required) Big data technologies (Hadoop, Spark, Kafka): 6 years (Required) Snowflake: 6 years (Required) Data warehouse: 6 years (Required) Location: Indore, Madhya Pradesh (Required) Work Location: In person
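As an illustration of the kind of reusable data-quality gate a tech lead might standardize across such ETL pipelines, here is a hedged sketch; the rules, thresholds, column names, and file path are assumptions.

```python
# Hedged sketch of a lightweight data-quality gate run before each load.
# Rules and thresholds are illustrative, not the client's actual checks.
import pandas as pd


def run_quality_gate(df: pd.DataFrame) -> list[str]:
    """Return a list of failed checks; an empty list means the batch may load."""
    failures = []
    if df.empty:
        failures.append("batch is empty")
    if df["patient_id"].isna().mean() > 0.01:          # at most 1% nulls
        failures.append("too many null patient_id values")
    if df.duplicated(subset=["claim_id"]).any():
        failures.append("duplicate claim_id rows")
    if (df["amount"] < 0).any():
        failures.append("negative amounts present")
    return failures


batch = pd.read_parquet("staging/claims_batch.parquet")  # hypothetical path
problems = run_quality_gate(batch)
if problems:
    raise ValueError(f"Quality gate failed: {problems}")  # halt the pipeline
```

Centralizing checks like these in one function is what makes the "best practices for data architecture, modeling, and governance" responsibility enforceable rather than aspirational.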
Posted 3 days ago
7.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
About Us Resolute is a forward-thinking investment firm committed to strategic diversification and sustainable growth across high-impact sectors. Its diversified portfolio includes Hair Drama Company (a luxury fashion accessories brand specializing in hair accessories, operating in India, the UK and the UAE), Sanghi Industries Limited (a prominent company jointly owned by the Adani Group and the Sanghi family), ShareSquare (a fractional real estate platform based in Dubai), and Resolute Sports (a sports franchise that owns the Hyderabad Toofan team in the HIL and the Delhi Toofan in the VPL). We nurture ventures that redefine industry benchmarks and create long-lasting value. We are now seeking a visionary Analytics Head who can lead our data initiatives, drive strategic insights, and help translate complex data into powerful business decisions. Roles and Responsibilities Strategic Data Leadership: Define and lead the end-to-end data strategy, encompassing data acquisition, integration, processing, and visualization to deliver scalable analytics solutions across the organization. Business Intelligence & Insights: Identify trends and key business opportunities through deep analysis of large and complex datasets, including internal sources, third-party data, research publications, and digital channels such as social media. Collaboration with Leadership: Partner with executive leadership and key business stakeholders to translate strategic goals into analytical frameworks, KPIs, and data-driven solutions that drive decision-making. Data Architecture & Governance: Establish robust data protocols, models, and pipelines ensuring high data quality, consistency, and governance across the enterprise. Analytics Framework Development: Build, implement, and maintain leading-edge analytics frameworks for descriptive, predictive, and prescriptive insights using tools like Python, R, SQL, and cloud-based platforms (AWS, GCP, Azure). Innovation & Thought Leadership: Drive innovation in analytics, including the application of machine learning, AI, and automation to solve business challenges and create competitive advantage. Cross-functional Enablement: Enable cross-functional teams with the tools, platforms, and reports necessary to analyze and act on data effectively. Develop dashboards and interactive reports aligned with industry standards. Performance Monitoring: Design and maintain dashboards for critical metrics and KPIs to track business performance, operational efficiency, and strategic alignment. Requirements Proven 7+ years of experience in analytics, data science, or business intelligence, including at least 2 years in a leadership role. Strong proficiency in SQL, Python, R, and modern data visualization tools (Tableau, Power BI, Looker, etc.). Expertise in data modelling, cloud data platforms (MS Azure, AWS Redshift, BigQuery, Snowflake), and ETL tools. Hands-on experience with machine learning models, statistical techniques, and AI-driven solutions. Strong understanding of data governance, security, and compliance standards. Excellent problem-solving and communication skills with the ability to convey complex data insights to non-technical stakeholders. Bachelor’s or Master’s degree in Data Science, Statistics, Computer Science, or a related field. Preferred Qualifications Prior experience in an investment firm, financial services, or high-growth startup environment. Familiarity with business operations across finance, product, and marketing.
Experience with open-source data science libraries and deployment of ML models into production. Strong project management capabilities and familiarity with Agile methodologies.
Posted 3 days ago
3.0 - 5.0 years
0 Lacs
Navi Mumbai, Maharashtra, India
On-site
Role/Position - Senior Data Engineer (Alan B) Location - Navi Mumbai (Ghansoli) Experience - 3-5 years Notice period - 1 month • Have excellent interpersonal communication and stakeholder engagement skills. • Have a certificate or diploma in computer science, information systems, or a related field. • Have 3 to 5 years of experience in a similar environment, of which two years in an operational support role is beneficial. • Have team lead experience. • Good understanding of service and product support business processes. • Have worked with data unit testing. • Have developed databases and data warehouses. • Have worked with Control-M. • Have experience with SQL. • Have a strong drive to pay attention to detail. • Understanding and implementation of database performance optimization, tuning, analysis and specification. • Solution analysis and problem-solving skills. • Strong organizational skills. • Knowledgeable about all phases of the software development life cycle. • Certificates to demonstrate knowledge and competency are an advantage. • Experience and knowledge of AWS S3 and Redshift cloud computing desired. • Experience with Ab Initio desired. • Development experience in reporting solutions like Power BI is advantageous. • Finance experience is an advantage, as would be any banking experience. You will have access to: • Other team leads and experienced individuals who collaborate effectively. • The high-profile area of operational excellence in the Core Finance Platform. • Opportunities to network and grow. • A challenging working environment. • Opportunities to be rewarded for innovation.
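To make the "data unit testing" bullet concrete: when a transformation is written as a pure function, it can be tested without a database. Here is a minimal, hedged sketch; the business rule shown is an assumption, not the client's actual logic.

```python
# Illustrative data unit test: the transform is a pure function over a
# DataFrame, so pytest can verify it in isolation. The rule is hypothetical.
import pandas as pd


def derive_balance_flags(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    out["is_overdrawn"] = out["balance"] < 0
    out["balance_band"] = pd.cut(
        out["balance"],
        bins=[float("-inf"), 0, 10_000, float("inf")],
        labels=["negative", "low", "high"],
    )
    return out


def test_derive_balance_flags():
    df = pd.DataFrame({"balance": [-50.0, 500.0, 50_000.0]})
    result = derive_balance_flags(df)
    assert result["is_overdrawn"].tolist() == [True, False, False]
    assert result["balance_band"].tolist() == ["negative", "low", "high"]
```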
Posted 3 days ago
5.0 - 10.0 years
0 - 0 Lacs
Pune, Maharashtra
On-site
You will be responsible for architecting data warehousing and business intelligence solutions to address cross-functional business challenges. This will involve interacting with business stakeholders to gather requirements and deliver comprehensive Data Engineering, Data Warehousing, and analytics solutions. Additionally, you will collaborate with other technology teams to extract, transform, and load data from diverse sources. You should have a minimum of 5-8 years of end-to-end Data Engineering Development experience, preferably across industries such as Retail, FMCG, Manufacturing, Finance, and Oil & Gas. Experience in functional domains like Sales, Procurement, Cost Control, Business Development, and Finance is desirable. You are expected to have 3 to 10 years of experience in data engineering projects using Azure or AWS services, with hands-on expertise in data transformation, processing, and migration using tools such as Azure Data Lake Storage, Azure Data Factory, Databricks, AWS Glue, Redshift, and Athena. Familiarity with MS Fabric and its components will be advantageous, along with experience in working with different source/target systems like Oracle Database, SQL Server Database, Azure Data Lake Storage, ERP, CRM, and SCM systems. Proficiency in reading data from sources via APIs/Web Services and utilizing APIs to write data to target systems is essential. You should also have experience in Data Cleanup, Data Cleansing, and optimization tasks, including working with non-structured data sets in Azure. Knowledge of analytics tools like Power BI and Azure Analysis Services, as well as exposure to private and public cloud architectures, will be beneficial. Excellent written and verbal communication skills are crucial for this role. Ideally, you hold a degree in M.Tech / B.E. / B.Tech (Computer Science, Information Systems, IT) / MCA / MCS. Key requirements include expertise in MS Azure Data Factory, Python/PySpark coding, Synapse Analytics, Azure Function Apps, Azure Databricks, AWS Glue, Athena, Redshift, and Databricks PySpark. Exposure to integration with various applications/systems like ERP, CRM, SCM, web apps using APIs, cloud, on-premise systems, DBs, and file systems is expected. The role necessitates a minimum of 3 full-cycle Data Engineering implementations (5-10 years of experience) with a focus on building data warehouses and implementing data models. Exposure to the consulting industry is mandatory, along with strong verbal and written communication skills. Your primary skills should encompass Data Engineering Development, Cloud Engineering with Azure or AWS, Data Warehousing & BI Solutions Architecture, Programming (Python/PySpark), Data Integration across various systems, Consulting experience, ETL and Data Transformation, and knowledge of Cloud Architecture. Additionally, familiarity with MS Fabric, handling non-structured data, Data Cleanup and Optimization, APIs/Web Services, Data Visualization, and industry and functional knowledge will be advantageous. The compensation package ranges from INR 12-28 LPA, subject to the candidate's performance and experience level.
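As a hedged sketch of the API-to-target ingestion described above, the example below lands a raw API payload in a lake's raw zone; the endpoint, auth scheme, and bucket layout are placeholders, not a real system.

```python
# Illustrative API-to-lake ingestion. The endpoint, token, and bucket names
# are hypothetical assumptions.
import json
from datetime import date, datetime, timezone

import boto3
import requests

resp = requests.get(
    "https://api.example-crm.com/v1/opportunities",  # hypothetical endpoint
    headers={"Authorization": "Bearer <token>"},
    params={"updated_since": date.today().isoformat()},
    timeout=30,
)
resp.raise_for_status()
records = resp.json()

# Land the raw payload unmodified in the lake's raw zone, partitioned by day
s3 = boto3.client("s3")
stamp = datetime.now(timezone.utc)
s3.put_object(
    Bucket="company-raw-zone",
    Key=f"crm/opportunities/dt={stamp:%Y-%m-%d}/batch_{stamp:%H%M%S}.json",
    Body=json.dumps(records).encode("utf-8"),
)
```

Keeping the raw payload untouched before any cleansing step preserves replayability — a standard raw/curated split in both the Azure and AWS stacks named above.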
Posted 3 days ago
8.0 - 12.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As an AWS Data Engineer at Neoware Technology Solutions, a technology company based in Chennai, you will be responsible for optimizing Redshift database queries for enhanced performance, managing large table partitions, and utilizing AWS tools and services such as Glue, EMR, Athena, and Step Functions for data processing and management. Your role will involve developing and maintaining data pipelines, blending data using Python and SQL, and writing advanced code to support data engineering tasks. Additionally, you will apply your knowledge of visualization tools like Power BI and Tableau to create insightful data visualizations and collaborate closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.

To excel in this role, you should have advanced knowledge of Redshift database architecture, query optimization techniques, and performance tuning. Proficiency in AWS tools and services relevant to data engineering, such as Glue, EMR, Athena, and Step Functions, is essential. Strong programming skills in Python for data manipulation, analysis, and automation, as well as mastery of SQL for data querying, manipulation, and optimization, are required. Experience with visualization tools like Power BI or Tableau will be an advantage.

In addition to technical skills, you should possess soft skills such as problem-solving abilities to identify and resolve complex data-related issues, effective communication skills to collaborate with stakeholders and document technical processes, strong analytical skills to analyze data and extract meaningful insights, meticulous attention to detail to ensure data accuracy and consistency, and flexibility to adapt to evolving technologies and data requirements.

If you have 8 to 10 years of experience and are looking for a challenging opportunity to work in Chennai/Bangalore (Work from Office - 5 days), please reach out with your resume to hr@neoware.ai.
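For context on the Redshift query tuning this role emphasises, a brief sketch using psycopg2 (Redshift speaks the Postgres wire protocol); the cluster details and the fact_events table are placeholders, not a prescribed design.

# Sketch of a Redshift tuning workflow; connection details and table
# names are placeholders for illustration.
import psycopg2

conn = psycopg2.connect(host="example-cluster.abc.redshift.amazonaws.com",
                        port=5439, dbname="analytics",
                        user="analyst", password="...")

with conn.cursor() as cur:
    # Distribution and sort keys chosen to co-locate join keys and
    # prune range scans -- the core of most Redshift query tuning.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS fact_events (
            event_id   BIGINT,
            user_id    BIGINT,
            event_date DATE
        )
        DISTKEY (user_id)
        SORTKEY (event_date);
    """)
    # EXPLAIN reveals whether the plan avoids costly redistribution
    # steps (DS_BCAST_INNER / DS_DIST_*) between cluster nodes.
    cur.execute(
        "EXPLAIN SELECT COUNT(*) FROM fact_events "
        "WHERE event_date >= '2024-01-01';"
    )
    for row in cur.fetchall():
        print(row[0])
conn.commit()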
Posted 3 days ago
4.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
The role of Support Engineer requires monitoring and maintaining integration pipelines, data flows, and jobs to ensure system uptime, performance, and stability. You will be responsible for troubleshooting issues promptly and ensuring timely resolution to minimize business impact. Additionally, you will monitor and maintain data models to ensure data accuracy and consistency. Utilizing ITIL best practices, you will efficiently manage incidents to ensure minimal disruption to operations. In Digital Transformation (DT) projects, you will triage incidents to identify and resolve issues promptly. Handling access requests, you will ensure proper authorization and security protocols are followed.

For Change and Problem Management, you will raise and manage Change Requests (CRs) for any system modifications or updates. It will be essential to conduct root cause analysis for recurring issues and document Problem Tickets for long-term solutions. Adherence to ITIL processes for managing changes and resolving problems effectively is crucial.

Your role will also involve Pipeline Validation and Analysis, where you will apply SnapLogic knowledge to troubleshoot issues within SnapLogic pipelines, APIs, and other integration points. Collaboration with stakeholders to understand integration requirements and recommend solutions will be necessary.

In terms of Service Delivery and Improvement, you will be responsible for developing, implementing, and maintaining service delivery processes in accordance with ITIL best practices. Identifying opportunities for process improvements and automation to enhance service delivery will be a continuous effort. Providing regular updates and reports on ongoing initiatives to stakeholders and the PMO is also a key aspect of the role. Collaboration with team members and stakeholders to understand requirements and provide effective support solutions will be crucial, as will communication with stakeholders, including senior management, business users, and other teams, to provide updates on incident status and resolution efforts. Facilitating User Acceptance Testing (UAT) of projects and Change Requests will also be part of your responsibilities.

Qualifications required for this role include a Bachelor's degree in Computer Science, Information Technology, Data Science, or a related field. A minimum of 4 years of experience in a support engineer role, including 2 years of relevant SnapLogic experience, is preferred, ideally in the pharmaceutical or a related domain. Proven experience in monitoring and maintaining jobs, schedules, and data models is required. Strong hands-on experience with the SnapLogic integration platform and proficiency in working with integration technologies are essential. Knowledge of common data formats and various databases, diagnostic and troubleshooting skills, and strong ITIL skills are also necessary. Excellent communication and collaboration skills, problem-solving abilities, and organizational skills are key attributes for this role.
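To make the monitoring duties concrete, a hypothetical sketch of a pipeline-status poller; the endpoint and payload shape here are invented for illustration and do not reflect SnapLogic's actual API, which the real role would use instead.

# Hypothetical monitoring loop; STATUS_URL and the response schema
# are invented for this sketch.
import time
import requests

STATUS_URL = "https://integration.example.com/api/pipelines/status"  # hypothetical

def poll_failed_runs():
    resp = requests.get(STATUS_URL, timeout=30)
    resp.raise_for_status()
    runs = resp.json()  # assumed: list of {"pipeline": ..., "state": ...}
    return [r for r in runs if r.get("state") == "Failed"]

while True:
    for run in poll_failed_runs():
        # In practice this would raise an ITIL incident ticket
        # rather than print to stdout.
        print(f"ALERT: pipeline {run['pipeline']} failed")
    time.sleep(300)  # poll every 5 minutes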
Posted 3 days ago
4.0 - 6.0 years
0 Lacs
India
Remote
Hi, we’re TechnologyAdvice.

At TechnologyAdvice, we pride ourselves on helping B2B tech buyers manage the complexity and risk of the buying process. We are a trusted source of information for tech buyers, delivering advice and facilitating connections between our buyers and the world’s leading sellers of business technology. Headquartered in Nashville, Tennessee, we are a remote-first company with more than 20 digital publications and over 500 global team members in the US, UK, Singapore, Australia, and the Philippines. We’re proud to have been repeatedly recognized as one of America’s fastest growing private companies by Inc., as well as a Tennessee top workplace. We work hard each day and have fun, too, with monthly virtual events, recreational Slack channels, and the occasional costumed dance from our CEO.

All positions are open to remote work unless otherwise specified in the requirements below.

The opportunity
As an Analytics Engineer and data modeler within the Business Intelligence team at TechnologyAdvice, you will transform source data into standardized reporting assets to improve business performance and help connect technology buyers and sellers. You will architect source-of-truth data schemas to support business intelligence and enable data-led opportunities. You will create and maintain semantic layers within reporting workflows, driving accuracy and consistency in how business logic is applied. You will work with business intelligence and data science to ensure adoption of standardized reporting tables. You will build production data products that serve as building blocks for predictive models and customer-facing experiences. You will address data quality issues to improve accuracy and increase transparency around upstream failures. You will develop governed production workflows to ensure stability and oversight in reporting processes. You will engineer logical, usable data models to support reporting self-service and adapt to continuously evolving data sources.

Success in this role requires the ability to partner effectively with internal stakeholders and develop a deep understanding of the data used to measure and optimize business performance. A positive attitude, attention to detail, and the ability to adapt to changing priorities are essential. If you’re looking for a role where your contributions make a difference and your ideas are welcomed, we want to hear from you.

Location: India

What You'll Do
Own the full lifecycle of data model development, including ideation, prototyping, implementation, refactoring, and deprecation of outdated assets.
Develop and maintain semantic data models that serve as the source-of-truth for data customers across the organization.
Build common dimension tables to support enterprise reporting use cases and improve data model consistency and maintainability.
Document and translate business requirements into complex data models that cover enterprise reporting needs, including marketing attribution and revenue recognition.
Standardize data nomenclature and data type conventions and transform legacy data objects to standardized models.
Partner with engineering, business intelligence, data science, and other teams to ensure alignment on development priorities and data solutions.
Build workflows that maximize the efficiency of data processes while maintaining high standards of data quality, data usability, and performance.
Adhere to best practices related to metadata management and metadata reporting.
Develop subject matter expertise in specific business areas and data domains, and help educate customers regarding the correct utilization of data objects.
Build and maintain production data products that serve as building blocks for business intelligence reporting, predictive data models, and product-led development initiatives.
Create and maintain data lineage documentation to improve transparency and auditability of data transformations and dependencies.
Implement automated data validation and testing frameworks to ensure data model integrity and trustworthiness (see the sketch after this list).
Manage quality assurance workstreams and drive adoption of appropriate incident management frameworks for enterprise reporting.
Partner with data engineering to optimize data transformations and scheduled procedures for cost, performance, and reporting schedules.
Work directly with business intelligence analysts to enforce the adoption of relevant data models and capture reporting requirements for data model development.
Partner with upstream data owners to identify opportunities to improve downstream reporting capabilities, reduce model complexity, and increase data coverage.
Participate in agile development processes, including sprint planning, retrospectives, and iterative delivery of data products.
Understand stakeholder business objectives and how data and analytics solutions can help internal customers meet their goals.
Identify opportunities for data acquisition or data integration projects to improve the value of enterprise data assets.

Who You Are
Bachelor's or Master's degree in a relevant field such as Computer Science, Information Systems, Data Science, or a related discipline.
4-6 years of experience in data engineering, analytics engineering, data modeling, data architecture, or data science, preferably in a digital business.
Understanding of best practices for designing modular and reusable data structures (e.g. star and snowflake schemas) and implementing conceptual and logical data models.
Advanced SQL techniques for data transformation, querying, and optimization.
Experience working within cloud-based data environments such as Snowflake, Redshift, or BigQuery and managing database procedures and functions.
Knowledge of data transformation frameworks and data lineage best practices.
Experience building, maintaining, and optimizing ETL/ELT pipelines, using modern tools like dbt, Dagster, Airflow, or similar.
Familiarity with version control, CI/CD, and modern development workflows.
Experience applying AI to improve work quality and the efficiency of the data model development process.
Ability to collaborate cross-functionally with data analysts, engineers, and business stakeholders to understand data needs and translate them into scalable models.
Knowledge of data governance principles, data quality standards, and regulatory compliance (e.g., GDPR, CCPA) is a plus.
Expertise in scripting and automation with experience in object-oriented programming and building scalable frameworks is a plus.
Experience building production dashboards using tools such as Tableau, Power BI, or Looker is a plus.
Strong attention to detail and a passion for staying updated with industry trends and emerging data management and data transformation technologies.
Agile professional who excels in a fast-paced environment and thrives on continuously pivoting strategies to drive business needs forward.

Please note that, as this is a contract position, no perks or benefits are included with this role.
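As a concrete example of the automated data validation called out in the responsibilities above, a minimal sketch; the table names are illustrative and run_query stands in for whichever warehouse connector (Snowflake, Redshift, BigQuery) is in use.

# Minimal validation sketch: each check is a SQL query that returns a
# non-zero count when something is wrong. Tables are hypothetical.
CHECKS = {
    # Referential integrity: every fact row must join to the dimension
    "orphaned_fact_keys": """
        SELECT COUNT(*) FROM fact_orders f
        LEFT JOIN dim_customer d USING (customer_key)
        WHERE d.customer_key IS NULL
    """,
    # Freshness: flag the table if the latest load is over a day old
    "stale_load": """
        SELECT CASE WHEN MAX(loaded_at) < CURRENT_DATE - 1
                    THEN 1 ELSE 0 END
        FROM fact_orders
    """,
}

def run_checks(run_query):
    """run_query: callable taking SQL and returning a scalar count."""
    failures = {}
    for name, sql in CHECKS.items():
        bad = run_query(sql)
        if bad:
            failures[name] = bad
    if failures:
        raise RuntimeError(f"Data validation failed: {failures}")

In practice a framework like dbt tests or Great Expectations would own these checks; the sketch just shows the shape of the logic.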
Work authorization
Employer work visa sponsorship and support are not provided for this role. Applicants must be currently authorized to work in India at hire and must maintain authorization to work in India throughout their employment with our company.

Salary Range
We seek to hire top-tier individuals and intend for our compensation to be at a rate that allows us to recruit and retain individuals who align with our core values, purpose, mission, and vision. Final total compensation is based on a multitude of factors including, but not limited to, skill level, relevant experience to the position, and cost of labor. Hourly pay range: ₹1,600–₹2,500 INR

EOE statement
We believe that our differences make us stronger, and thus foster a diverse and inclusive culture where people feel safe being themselves. TechnologyAdvice is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, age, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected under federal, state or local law.

Pre-employment screening required. TechnologyAdvice does not engage with external staffing agencies. Any candidates introduced by such firms will not be eligible for compensation. Any AI-generated or incomplete application answers will be auto-rejected.
Posted 3 days ago
5.0 years
0 Lacs
Itanagar, Arunachal Pradesh, India
Remote
From Fivetran’s founding until now, our mission has remained the same: to make access to data as simple and reliable as electricity. With Fivetran, customer data arrives in their warehouses, canonical and ready to query, with no engineering or maintenance required. We’re proud that more organizations continue to leverage our technology every day to become truly data-driven.

About The Role
As a Fivetran Enterprise Sales Engineer in India (APAC), you will drive massive transformation and efficiency for customers of all sizes in a booming market. Using your experience with Data Replication/Integration, ETL/ELT, cloud data warehousing, and analytics, you will skillfully guide prospects and partners to faster, more reliable, and secure data integration solutions. You will be directly and enthusiastically supported by a world-class Sales Engineering team, an agile and hungry India Sales team, and a product that “just works.” This is a full-time position based remotely out of India or Singapore, with preference for the Bengaluru area, primarily supporting the Indian GTM team in the Enterprise and Commercial space.

Technologies You’ll Use
You will leverage a wide variety of internal tools to help demonstrate the value of Fivetran, including: Java, Postgres, Oracle, SQL Server, MySQL, Kubernetes, Docker, AWS, GCP, Snowflake, Databricks, BigQuery

What You’ll Do
Drive pre-sales technical discovery and solutioning discussions with data architects, data engineers, database administrators, systems architects, and other technical professionals at all levels of customer organizations interested in getting more value out of their data
Plan, manage, and effectively close out technical demonstrations and proof of concept (POC) projects, often serving as both the subject matter expert and project manager
Respond to technical enquiries, including Requests for Information (RFIs), Requests for Proposal (RFPs), and similar formal documents
Enable Fivetran staff, partner teams, and the data engineering community at large on data concepts, product features, competitive positioning, and best practices; this may take a variety of forms including but not limited to written articles, group presentations, bespoke training sessions, recorded videos, one-to-one coaching, etc.
Provide technical guidance and assistance to new customers to ensure successful adoption and scale

Skills We’re Looking For
A go-getter attitude!
Willing to show great initiative to get things done with creative solutions
At least 5 years of working/operational experience with Oracle, Microsoft SQL Server, SAP, or similar within an enterprise, or providing and supporting end-user database operations as a systems integrator
Experience with Data Replication/Integration (CDC) tools
Proficiency in SQL scripting
Knowledge of Python/JavaScript and REST API interaction
Hands-on experience with modern databases (PostgreSQL, MariaDB/MySQL, MongoDB) in on-premise and cloud deployments
Working knowledge of one or more major cloud providers (AWS, GCP, Azure)
Knowledge of modern cloud data warehousing providers (Snowflake, Redshift, BigQuery, or Databricks) and data lake architectures (Kafka, HDFS, S3, ADLS)
Experience successfully demonstrating and articulating the value of Business Intelligence and Analytics tools
Ability to influence technical audiences in their language, and influence business audiences by translating complex technology concepts
Passion for data and analytics, and a philosophy of using technology to help solve business problems
Keen analytical and problem-solving skills

Bonus Skills
Knowledge of SAP ECC and SAP HANA would be advantageous
Previous experience at a data replication vendor

Perks And Benefits
100% employer-paid medical insurance
Generous paid time-off policy (PTO), plus paid sick time, inclusive parental leave policy, holidays, and volunteer days off
RSU stock grants*
Professional development and training opportunities
Company virtual happy hours, free food, and fun team-building activities
Monthly cell phone stipend
Access to an innovative mental health support platform that offers personalized care and resources in areas such as therapy, coaching, and self-guided mindfulness exercises for all covered employees and their covered dependents
*May vary by country and worker type - please reach out to your recruiter for more information

Click here to learn more about Fivetran's Benefits by Region.

We’re honored to be valued at over $5.6 billion, but more importantly, we’re proud of our core values of Get Stuck In, Do the Right Thing, and One Team, One Dream. Read about us in Forbes.

Fivetran brings together high-quality talent across the globe to make data access as easy and reliable as electricity for our customers. We value and recognize that our customers benefit from having innovative teams made of people from many backgrounds, experiences, and identities. Fivetran promotes diversity, equity, inclusion & belonging through attracting, recruiting, developing, and retaining a diverse workforce, not only because it is the right thing to do, but because it helps us build a world-class company to better serve our customers, our people and our communities.

To learn more about Fivetran’s culture and what it’s like to be part of the team, click here and enjoy our video. To learn more about our candidate privacy policy, you can read our statement here.

We are committed to ensuring that all candidates have an equal opportunity to participate in our interview process. If you require accommodations at any stage of the process due to a disability, medical condition, or any other circumstance, please don't hesitate to submit your request by filling out this form. We will work with you to provide reasonable accommodations to facilitate your participation and ensure a fair and accessible interview experience. Your request and any information provided will be kept confidential and will not impact your candidacy.
We look forward to hearing from you and accommodating your needs to the best of our ability.
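For background on the CDC experience this posting asks for, a small sketch of log-based change capture using plain Postgres logical decoding via psycopg2; the slot name and connection details are placeholders, and commercial replication tools layer schema handling, ordering, and delivery guarantees on top of this basic idea.

# CDC concept sketch: requires wal_level=logical on the source
# Postgres; connection string and slot name are placeholders.
import psycopg2

conn = psycopg2.connect("dbname=shop user=replicator")
conn.autocommit = True

with conn.cursor() as cur:
    # One-time setup: a logical replication slot using the built-in
    # test_decoding output plugin.
    cur.execute(
        "SELECT pg_create_logical_replication_slot"
        "('cdc_demo', 'test_decoding');"
    )
    # Each poll drains committed changes (INSERT/UPDATE/DELETE) since
    # the last read -- the stream a replication tool ships downstream.
    cur.execute(
        "SELECT lsn, data FROM "
        "pg_logical_slot_get_changes('cdc_demo', NULL, NULL);"
    )
    for lsn, change in cur.fetchall():
        print(lsn, change)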
Posted 4 days ago
10.0 years
0 Lacs
Kolkata, West Bengal, India
Remote
JOB_POSTING-3-72996-2

Job Description
Role Title: AVP, Cloud Solution Architect (L11)

Company Overview
Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry’s most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more. We have recently been ranked #5 among India’s Best Companies to Work for 2023, #21 under LinkedIn Top Companies in India list, and received Top 25 BFSI recognition from Great Place To Work India. We have been ranked Top 5 among India’s Best Workplaces in Diversity, Equity, and Inclusion, and Top 10 among India’s Best Workplaces for Women in 2022. We offer 100% Work from Home flexibility for all our Functional employees and provide some of the best-in-class Employee Benefits and Programs catering to work-life balance and overall well-being. In addition to this, we also have Regional Engagement Hubs across India and a co-working space in Bangalore.

Organizational Overview
This role will be part of the Data Architecture & Analytics group in the CTO organization. The Data team is responsible for designing and developing scalable data pipelines for efficient data ingestion, transformation, and loading (ETL); collaborating with cross-functional teams to integrate new data sources and ensure data quality and consistency; and building and maintaining data models to facilitate data access and analysis by Data Scientists and Analysts. The team is also responsible for the SYF public cloud platform & services: governing the health, performance, capacity, and costs of resources, ensuring adherence to service levels, and building well-defined processes for cloud application development and service enablement.

Role Summary/Purpose
The Cloud Solution Architect will play a key role in modernizing SAS workloads by leading vendor refactoring efforts, break-fix execution, and user enablement strategies. This position requires a deep understanding of SAS, AWS analytics services (EMR Studio, S3, Redshift, Glue), and Tableau, combined with strong user engagement, training development, and change management skills. The role involves collaborating with vendors, business users, and cloud engineering teams to refactor legacy SAS code, ensure seamless execution of fixes, and develop comprehensive training materials and user job aids. Additionally, the Cloud Solution Architect will oversee user testing, validation, and sign-offs, ensuring a smooth transition to modern cloud-based solutions while enhancing adoption and minimizing disruption. This is an exciting opportunity to lead cloud migration initiatives, enhance analytics capabilities, and drive user transformation efforts within a cutting-edge cloud environment.

Key Responsibilities
Lead refactoring efforts to modernize and migrate SAS-based workloads to cloud-native or alternative solutions.
Oversee break/fix execution by ensuring timely resolution of system issues and performance optimizations.
Engage with end-users to gather requirements, address pain points, and ensure smooth adoption of cloud solutions.
Develop and deliver custom training programs, including user job aids and self-service documentation.
Facilitate user sign-offs and testing by coordinating validation processes and ensuring successful implementation.
Drive user communication efforts related to system changes, updates, and migration timelines.
Work closely with AWS teams to optimize EMR Studio, Redshift, Glue, and other AWS services for analytics and reporting.
Ensure seamless integration with Tableau and other visualization tools to support business reporting needs.
Implement best practices for user change management, minimizing disruption and improving adoption.

Required Skills/Knowledge
Bachelor’s Degree in Computer Science, Software Engineering, or a related field. Advanced degrees (Master’s or Ph.D.) can be a plus but are not always necessary if experience is significant.
Experience in scripting languages (Python, SQL, or PySpark) for data transformations.
Proven expertise in SAS, including experience with SAS code refactoring and optimization.
Strong AWS experience, particularly with EMR Studio, S3, Redshift, Glue, and Lambda.
Experience in user change management, training development, and communication strategies.

Desired Skills/Knowledge
Experience with AWS cloud services.
Certifications in AWS or any other cloud platform.
Experience with Agile project management methods and practices.
Proficiency in Tableau for analytics and visualization.
Hands-on experience with cloud migration projects, particularly SAS workloads.
Excellent communication and stakeholder engagement skills.
Familiarity with other cloud platforms like Azure or GCP is a plus.

Eligibility Criteria
10+ years of experience in data analytics, cloud solutions, or enterprise architecture, with a focus on SAS migration and AWS cloud adoption, or, in lieu of a degree, 12+ years of experience.

Work Timings: 3 PM to 12 AM IST
(WORK TIMINGS: This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM Eastern Time and 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust twice a year locally). This window is for meetings with India and US teams. The remaining hours will be flexible for the employee to choose. Exceptions may apply periodically due to business needs. Please discuss this with the hiring manager for more details.)

For Internal Applicants
Understand the criteria or mandatory skills required for the role before applying.
Inform your manager and HRM before applying for any role on Workday.
Ensure that your professional profile is updated (fields such as education, prior experience, other skills) and it is mandatory to upload your updated resume (Word or PDF format).
Must not be on any corrective action plan (First Formal/Final Formal, PIP).
L9+ Employees who have completed 18 months in the organization and 12 months in their current role and level are eligible. L9+ Employees can apply.

Level/Grade: 11
Job Family Group: Information Technology
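To illustrate the SAS-to-cloud refactoring at the heart of this role, a hedged sketch showing a generic SAS DATA step and one possible PySpark equivalent; the dataset, columns, and paths are invented for illustration and are not Synchrony code.

# Illustrative SAS-to-PySpark refactor. Original SAS step (generic):
#
#   data work.high_value;
#     set work.accounts;
#     where balance > 10000;
#     utilization = balance / credit_limit;
#   run;
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

accounts = spark.read.parquet("s3://finance-lake/accounts/")  # placeholder path
high_value = (
    accounts.filter(F.col("balance") > 10000)
            .withColumn("utilization",
                        F.col("balance") / F.col("credit_limit"))
)
high_value.write.mode("overwrite").parquet("s3://finance-lake/high_value/")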
Posted 4 days ago
5.0 - 8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Position Description

Company Profile: At CGI, we’re a team of builders. We call our employees members because all who join CGI are building their own company - one that has grown to 72,000 professionals located in 40 countries. Founded in 1976, CGI is a leading IT and business process services firm committed to helping clients succeed. We have the global resources, expertise, stability and dedicated professionals needed to achieve results for our clients - and for our members. Come grow with us. Learn more at www.cgi.com.

This is a great opportunity to join a winning team. CGI offers a competitive compensation package with opportunities for growth and professional development. Benefits for full-time, permanent members start on the first day of employment and include a paid time-off program and profit participation and stock purchase plans. We wish to thank all applicants for their interest and effort in applying for this position; however, only candidates selected for interviews will be contacted. No unsolicited agency referrals please.

Job Description:
Job Title: ETL Testing
Experience: 5-8 Years
Location: Chennai, Bangalore
Employment Type: Full Time
Job Type: Work from Office (Monday - Friday)
Shift Timing: 12:30 PM to 9:30 PM

Required Skills: Analytical skills to understand requirements and develop test cases; ability to understand and manage data; strong SQL skills; hands-on testing of data pipelines built using Glue, S3, Redshift, and Lambda; collaboration with developers to build automated testing where appropriate; understanding of data concepts like data lineage, data integrity, and quality; experience testing financial data is a plus.

Your future duties and responsibilities
Expert-level analytical and problem-solving skills; able to show flexibility regarding testing.
Awareness of Quality Management tools and techniques.
Ensures best-practice quality assurance of deliverables; understands and works within agreed architectural process, data and organizational frameworks.
Advanced communication skills; fluent in English (written/verbal) and the local language as appropriate.
Open-minded; able to share information and transfer knowledge and expertise to team members.

Required Qualifications To Be Successful In This Role
Must-have skills: ETL, SQL, hands-on testing of data pipelines, Glue, S3, Redshift, data lineage, data integrity.
Good-to-have skills: Experience testing financial data is a plus.

Together, as owners, let’s turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you’ll reach your full potential because…

You are invited to be an owner from day 1 as we work together to bring our Dream to life. That’s why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company’s strategy and direction.

Your work creates value. You’ll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise.
You’ll shape your career by joining a company built to grow and last. You’ll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team - one of the largest IT and business consulting services firms in the world.
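As an example of the pipeline testing this role covers, a minimal pytest-style sketch reconciling an S3 landing zone against a Redshift target; the bucket, table, and connection details are placeholders.

# Reconciliation test sketch: placeholder bucket/table/DSN values.
import boto3
import psycopg2

def s3_object_count(bucket, prefix):
    s3 = boto3.client("s3")
    pages = s3.get_paginator("list_objects_v2").paginate(
        Bucket=bucket, Prefix=prefix)
    return sum(page.get("KeyCount", 0) for page in pages)

def redshift_row_count(table):
    # Placeholder DSN; real tests would read credentials from config
    with psycopg2.connect("host=... dbname=... user=... password=...") as conn:
        with conn.cursor() as cur:
            cur.execute(f"SELECT COUNT(*) FROM {table}")
            return cur.fetchone()[0]

def test_load_completeness():
    # Simplified completeness check; real reconciliation usually
    # compares exact row counts or checksums between source and target.
    assert s3_object_count("etl-landing", "trades/2024-01-01/") > 0
    assert redshift_row_count("trades") > 0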
Posted 4 days ago
4.0 - 11.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Hello, greetings from Quess Corp! Hope you are doing well. We have a job opportunity with one of our clients.

Designation – Data Engineer
Location – Gurugram
Experience – 4 to 11 years
Qualification – Graduate / PG (IT)
Skill Set – Data Engineer, Python, AWS, SQL

Essential capabilities
Enthusiasm for technology, keeping up with the latest trends
Ability to articulate complex technical issues and the desired outcomes of system enhancements
Proven analytical skills and evidence-based decision making
Excellent problem solving, troubleshooting & documentation skills
Strong written and verbal communication skills
Excellent collaboration and interpersonal skills
Strong delivery focus with an active approach to quality and auditability
Ability to work under pressure and excel within a fast-paced environment
Ability to self-manage tasks
Agile software development practices

Desired Experience
Hands-on in SQL and its Big Data variants (Hive-QL, Snowflake ANSI, Redshift SQL)
Python and Spark and one or more of its APIs (PySpark, Spark SQL, Scala), Bash/Shell scripting
Experience with source code control - GitHub, VSTS, etc.
Knowledge of and exposure to Big Data technologies in the Hadoop stack such as HDFS, Hive, Impala, Spark, etc., and cloud Big Data warehouses - Redshift, Snowflake, etc.
Experience with UNIX command-line tools
Exposure to AWS technologies including EMR, Glue, Athena, Data Pipeline, Lambda, etc.
Understanding and ability to translate/physicalise Data Models (Star Schema, Data Vault 2.0, etc.)

Essential Experience
It is expected that the role holder will most likely have the following qualifications and experience:
4-11 years technical experience (within the financial services industry preferred)
Technical domain experience (Subject Matter Expertise in Technology or Tools)
Solid experience, knowledge and skills in Data Engineering and BI/software development, such as ELT/ETL and data extraction and manipulation, in Data Lake/Data Warehouse/Lake House environments
Hands-on programming experience writing Python, SQL, Unix shell scripts, and PySpark scripts in a complex enterprise environment
Experience in configuration management using Ansible/Jenkins/GIT
Hands-on cloud-based solution design, configuration and development experience with Azure and AWS
Hands-on experience using AWS services - S3, EC2, EMR, SNS, SQS, Lambda functions, Redshift
Hands-on experience building data pipelines to ingest and transform data on the Databricks Delta Lake platform from a range of data sources - databases, flat files, streaming, etc.
Knowledge of Data Modelling techniques and practices used for a Data Warehouse/Data Mart application
Quality engineering development experience (CI/CD – Jenkins, Docker)
Experience in Terraform, Kubernetes and Docker
Experience with source control tools – GitHub or Bitbucket
Exposure to relational databases - Oracle, MS SQL or DB2 (SQL/PLSQL, database design, normalisation, execution plan analysis, index creation and maintenance, stored procedures), PostgreSQL/MySQL
Skilled in querying data from a range of data sources that store structured and unstructured data
Knowledge or understanding of Power BI (recommended)

Key Accountabilities
Design, develop, test, deploy, maintain and improve software
Develop flowcharts, layouts and documentation to identify requirements & solutions
Write well designed & high-quality testable code
Produce specifications and determine operational feasibility
Integrate software components into a fully functional platform
Apply pro-actively & perform hands-on design and implementation of best practice CI/CD
Coaching & mentoring of other Service Team members
Develop/contribute to software verification plans and quality assurance procedures
Document and maintain software functionality
Troubleshoot, debug and upgrade existing systems, including participating in DR tests
Deploy programs and evaluate customer feedback
Contribute to team estimation for delivery and expectation management for scope
Comply with industry standards and regulatory requirements
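To ground the Databricks Delta Lake pipeline experience listed above, a small sketch of a bronze-layer ingest; the paths and table names are hypothetical, and on Databricks the SparkSession is already provided for you.

# Bronze-layer ingest sketch; S3 path and table name are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Ingest a source extract landed as CSV in the data lake
src = (spark.read.option("header", True)
            .csv("s3://landing-zone/payments/2024-06-01/"))

# Delta format keeps the table ACID and merge-friendly for later
# silver/gold transformations
(src.write.format("delta")
    .mode("append")
    .saveAsTable("bronze.payments"))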
Posted 4 days ago
3.0 - 6.0 years
7 - 10 Lacs
Hyderabad
Remote
Job Type: C2H (Contract to Hire)

Senior Data Engineer. Responsible for implementing and supporting data pipelines from multiple business division source system apps into Solventum Snowflake, and supporting cloud data platform products.

Principal Responsibilities (essential job duties and responsibilities):
Architect end-to-end solutions for pulling source system data from various database systems, cloud apps, and flat files into the Snowflake data platform.
Build ELT/ETL pipelines between source systems and data warehouses leveraging the existing corporate tool stack and open-source tools.
Interface directly with business and systems subject matter experts to understand pipeline needs and determine effective solutions.
Work closely with data architects and senior analysts to identify common data requirements and develop shared solutions.
Administer and maintain cloud/on-prem RHEL/Windows servers for patching and upgrades.
Support data integration solutions in production systems.

Required Skills and Experience:
ELT/ETL and data warehouse background
Advanced SQL/Linux programming capabilities
Strong Python programming knowledge to build custom pipelines using APIs
Experience with at least one of the ELT/ETL tool stacks – Informatica Cloud, Fivetran, HVR, Talend, Glue, and so on
Experience with at least one of the cloud service providers – AWS, Azure, GCP
Success in a highly dynamic environment with the ability to shift priorities with agility
Ability to go from whiteboard discussion to code
Willingness to explore and implement new ideas and technologies
Ability to effectively communicate with technical and non-technical audiences
Ability to work independently with minimal supervision

Minimum Qualifications:
4+ years of experience with ANSI SQL; Snowflake strongly preferred
4+ years of experience in data pipelines and implementation with any ELT/ETL tool stack; IICS/Fivetran strongly preferred
3+ years of experience with Python/Linux scripting
4+ years of experience working directly with subject matter experts in both business and technology domains
3+ years implementing solutions in the cloud - AWS, Azure, GCP; AWS strongly preferred

Nice-to-have:
Experience with Machine Learning tools and processes
2+ years of experience with BI and analytics tools - Power BI or Tableau
ERP knowledge – SAP S/4HANA, Oracle EBS
Big data tools and technologies – Spark, Kafka, Redshift, Databricks, and so on

Education: Bachelor’s in computer science, Information Systems, Engineering, a science discipline, or similar.
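As a sketch of the API-to-Snowflake pattern described in the responsibilities, assuming a hypothetical REST endpoint; write_pandas comes from the official snowflake-connector-python package, and the account details are placeholders.

# API-to-Snowflake sketch; endpoint and credentials are placeholders.
import pandas as pd
import requests
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

# Hypothetical source API returning a JSON list of records
rows = requests.get("https://api.example.com/v1/invoices", timeout=60).json()
df = pd.DataFrame(rows)

conn = snowflake.connector.connect(
    account="myorg-myaccount", user="loader", password="...",
    warehouse="LOAD_WH", database="RAW", schema="FINANCE",
)
# write_pandas bulk-loads the frame via internal staging
write_pandas(conn, df, table_name="INVOICES", auto_create_table=True)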
Posted 4 days ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Updraft. Helping you make changes that pay off.

Updraft is an award-winning, FCA-authorised, high-growth fintech based in London. Our vision is to revolutionise the way people spend and think about money, by automating the day-to-day decisions involved in managing money and mainstream borrowings like credit cards, overdrafts and other loans.

A 360-degree spending view across all your financial accounts (using Open Banking)
A free credit report with tips and guidance to help improve your credit score
Native AI-led personalised financial planning to help users manage money, pay off their debts and improve their credit scores
Intelligent lending products to help reduce the cost of credit

We have built scale and are getting well recognised in the UK fintech ecosystem.

800k+ users of the mobile app, which has helped users swap c. £500m of costly credit-card debt for smarter credit, putting hundreds of thousands on a path to better financial health
The product is highly rated by our customers: we are rated 4.8 on Trustpilot, 4.8 on the Play Store, and 4.4 on the iOS Store
We were selected for Tech Nation Future Fifty 2025 - a program that recognises and supports successful and innovative scaleups to IPOs; 30% of UK unicorns have come out of this program
Updraft once again featured in the Sifted 100 UK startups - among only 25 companies to have made the list in both 2024 and 2025

We are looking for exceptional talent to join us on our next stage of growth with a compelling proposition - purpose you can feel, impact you can measure, and ownership you'll actually hold. Expect a hybrid, London-hub culture where cross-functional squads tackle real-world problems with cutting-edge tech; generous learning budgets and wellness benefits; and the freedom to experiment, ship, and see your work reflected in customers' financial freedom. At Updraft, you'll help build a fairer credit system.

Role And Responsibilities
Join our Analytics team to deliver cutting-edge solutions.
Support business and operations teams in making better data-driven decisions by ingesting new data sources, creating intuitive dashboards and producing data insights
Build new data processing workflows to extract data from core systems for analytics products
Maintain and improve existing data processing workflows
Contribute to optimizing and maintaining the production data pipelines, including system and process improvements
Contribute to the development of analytical products and dashboards with integration of internal and third-party data sources/APIs
Contribute to cataloguing and documentation of data

Requirements
Bachelor's degree in mathematics, statistics, computer science or a related field
2-5 years of experience working in data engineering/analytics and related fields
Advanced analytical framework and experience relating data insights to business problems and creating appropriate dashboards
High proficiency in ETL, SQL and database management (mandatory)
Experience with AWS services like Glue, Athena, Redshift, Lambda, S3
Python programming experience using data libraries like pandas and numpy
Interest in machine learning, logistic regression and emerging solutions for data analytics
You are comfortable working without direct supervision on outcomes that have a direct impact on the business
You are curious about the data and have a desire to ask "why?"
Good to have but not mandatory:
Experience in a startup or fintech will be considered a great advantage
Awareness of or hands-on experience with ML/AI implementation or MLOps
Certification in AWS foundations

Benefits
Opportunities to Take Ownership - Work on high-impact projects with real autonomy
Fast Career Growth - Gain exposure to multiple business areas and advance quickly
Be at the Forefront of Innovation - Work on cutting-edge technologies and disruptive ideas
Collaborative & Flat Hierarchy - Work closely with leadership and have a real voice
Dynamic, Fast-Paced Environment - No two days are the same; challenge yourself every day
A Mission-Driven Company - Be part of something that makes a difference
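For a flavour of the Glue/Athena/pandas stack named in the requirements, a minimal sketch using the AWS SDK for pandas (awswrangler); the database and table are illustrative.

# Athena-to-pandas sketch; database and table names are hypothetical.
import awswrangler as wr

# Query a Glue-catalogued table via Athena straight into a DataFrame
df = wr.athena.read_sql_query(
    "SELECT credit_band, COUNT(*) AS users "
    "FROM app_events GROUP BY credit_band",
    database="analytics",
)
print(df.sort_values("users", ascending=False))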
Posted 4 days ago