8.0 - 10.0 years
10 - 15 Lacs
Noida
Work from Office
Key Responsibilities:
- Data Testing Strategy & Execution: Design, develop, and execute comprehensive test plans and test cases for data-centric applications, ETL processes, data warehouses, data lakes, and reporting solutions.
- SQL-Driven Validation: Utilize advanced SQL queries to perform complex data validation, data reconciliation, data integrity checks, and data quality assurance across various financial data sources.
- ETL Testing: Conduct thorough testing of ETL (Extract, Transform, Load) processes, ensuring data is accurately extracted, transformed according to business rules, and loaded correctly into target systems.
- Data Quality Assurance: Implement and monitor data quality checks, identify data discrepancies, anomalies, and inconsistencies, and work with development and business teams to resolve issues.
- Performance Testing (Data Focus): Contribute to performance testing efforts for data pipelines and database operations, ensuring optimal query and data load performance.
- Test Data Management: Create and manage robust test data sets for various testing phases, including positive, negative, and edge case scenarios.
- Defect Management: Identify, document, track, and re-test defects in data, collaborating closely with development and data engineering teams for timely resolution.
- Documentation & Reporting: Maintain clear and concise documentation of test plans, test cases, test results, and data quality reports. Provide regular status updates to stakeholders.
- Collaboration: Work effectively with business analysts, data architects, data engineers, and project managers to understand data flows and business requirements, and ensure data quality standards are met.
- Process Improvement: Proactively identify opportunities for process improvements in data testing methodologies and tools.
- Global Team Collaboration: Provide consistent overlap with EST working hours (until noon EST) to facilitate effective communication and collaboration with US-based teams.

Required Skills & Experience:
- Experience: 8-10 years of hands-on experience in Data Quality Assurance, Data Testing, or ETL Testing roles.
- SQL Expertise:
  - Advanced proficiency in SQL: ability to write complex queries, subqueries, analytical (window) functions, CTEs, and stored procedures for data validation, reconciliation, and analysis.
  - Experience with various SQL databases (e.g., SQL Server, Oracle, PostgreSQL, MySQL, Snowflake, BigQuery).
  - Strong understanding of database concepts: normalization, indexing, primary/foreign keys, and data types.
- Data Testing Methodologies: Solid understanding of data warehousing concepts, ETL processes, and various data testing strategies (e.g., source-to-target mapping validation, data transformation testing, data load testing, data completeness, data accuracy).
- Domain Expertise:
  - Strong understanding and proven experience in the Risk and Finance IT domain: familiarity with financial data (e.g., trading data, market data, risk metrics, accounting data, regulatory reporting).
  - Knowledge of financial products, regulations, and risk management concepts.
- Analytical & Problem-Solving Skills: Excellent ability to analyze complex data sets, identify root causes of data issues, and propose effective solutions.
- Communication: Strong verbal and written communication skills to articulate data issues and collaborate with diverse teams.

Mandatory Competencies:
- ETL - ETL - Tester
- QA/QE - QA Automation - ETL Testing
- Database - PostgreSQL - PostgreSQL
- Beh - Communication
- Database - SQL Server - SQL Packages
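For illustration, here is a minimal sketch of the kind of SQL-driven source-to-target reconciliation this role describes, run through the Python BigQuery client. The project, dataset, table, and column names (stg_trades, dw_trades, notional) are hypothetical placeholders; the same CTE-based pattern applies to any of the databases listed above.

```python
# Minimal sketch: source-to-target reconciliation with CTEs and a row/checksum compare.
# Project, dataset, table, and column names are hypothetical placeholders.
from google.cloud import bigquery

RECONCILIATION_SQL = """
WITH source AS (
  SELECT COUNT(*) AS row_count, SUM(CAST(notional AS NUMERIC)) AS notional_sum
  FROM `example-project.staging.stg_trades`
),
target AS (
  SELECT COUNT(*) AS row_count, SUM(CAST(notional AS NUMERIC)) AS notional_sum
  FROM `example-project.dw.dw_trades`
)
SELECT
  source.row_count AS source_rows,
  target.row_count AS target_rows,
  source.row_count - target.row_count       AS row_diff,
  source.notional_sum - target.notional_sum AS notional_diff
FROM source CROSS JOIN target
"""

def run_reconciliation() -> None:
    client = bigquery.Client()  # uses default project credentials
    row = list(client.query(RECONCILIATION_SQL).result())[0]
    if row.row_diff != 0 or row.notional_diff != 0:
        # In practice this would raise a defect/alert rather than print.
        print(f"Mismatch: rows={row.row_diff}, notional={row.notional_diff}")
    else:
        print("Source and target reconcile.")

if __name__ == "__main__":
    run_reconciliation()
```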
Posted 1 week ago
7.0 - 12.0 years
11 - 15 Lacs
Noida
Work from Office
Primary Skill(s): Lead Data Visualization Engineer with experience in Sigma BI

Experience: 7+ years of experience in Data Visualization with Sigma BI, Power BI, Tableau, or Looker

Job Summary: Lead Data Visualization Engineer with deep expertise in Sigma BI and a strong ability to craft meaningful, insight-rich visual stories for business stakeholders. This role will be instrumental in transforming raw data into intuitive dashboards and visual analytics, helping cross-functional teams make informed decisions quickly and effectively.

Key Responsibilities:
- Lead the design, development, and deployment of Sigma BI dashboards and reports tailored for various business functions.
- Translate complex data sets into clear, actionable insights using advanced visualization techniques.
- Collaborate with business stakeholders to understand goals, KPIs, and data requirements.
- Build data stories that communicate key business metrics, trends, and anomalies.
- Serve as a subject matter expert in Sigma BI and guide junior team members on best practices.
- Ensure visualizations follow design standards, accessibility guidelines, and performance optimization.
- Partner with data engineering and analytics teams to source and structure data effectively.
- Conduct workshops and training sessions to enable business users to consume and interact with dashboards.
- Drive the adoption of self-service BI tools and foster a data-driven decision-making culture.

Required Skills & Experience:
- 7+ years of hands-on experience in Business Intelligence, with at least 2 years using Sigma BI.
- Proven ability to build end-to-end dashboard solutions that tell a story and influence decisions.
- Strong understanding of data modeling, SQL, and cloud data platforms (Snowflake, BigQuery, etc.).
- Demonstrated experience working with business users, gathering requirements, and delivering user-friendly outputs.
- Proficient in data storytelling, UX design principles, and visualization best practices.
- Experience integrating Sigma BI with modern data stacks and APIs is a plus.
- Excellent communication and stakeholder management skills.

Preferred Qualifications:
- Experience with other BI tools (such as Tableau, Power BI, or Looker) is a plus.
- Familiarity with AWS cloud data ecosystems (AWS Databricks).
- Background in Data Analysis, Statistics, or Business Analytics.

Working Hours: 2 PM - 11 PM IST (approximately 4:30 AM - 1:30 PM ET)

Communication skills: Good

Mandatory Competencies:
- BI and Reporting Tools - BI and Reporting Tools - Power BI
- BI and Reporting Tools - BI and Reporting Tools - Tableau
- Database - Database Programming - SQL
- Cloud - GCP - Cloud Data Fusion, Dataproc, BigQuery, Cloud Composer, Cloud Bigtable
- Data Science and Machine Learning - Data Science and Machine Learning - Databricks
- Cloud - AWS - ECS
- DMS - Data Analysis Skills
- Beh - Communication and collaboration
Posted 1 week ago
8.0 - 12.0 years
22 - 32 Lacs
Noida, Pune, Bengaluru
Hybrid
- Build and optimize ELT/ETL pipelines using BigQuery, GCS, Dataflow, Pub/Sub, and orchestration services (Composer/Airflow).
- Hands-on experience in building ETL/ELT pipelines and developing software code in Python.
- Experience in working with data warehouses, data warehouse technical architectures, and reporting/analytic tools.
- Develop and implement data quality and governance procedures to ensure the accuracy and reliability of data.
- Demonstrate extensive skills and success in the implementation of technology projects within a professional environment, with a particular focus on data engineering.
- Eager to learn and explore new services within GCP to enhance skills and contribution to projects.
- Demonstrated excellent communication, presentation, and problem-solving skills.
- Prior experience with ETL tools such as dbt, Talend, etc.

Good to have skills:
- AI/ML and Gen AI background.
- IAM, Cloud Logging and Monitoring.
- Coaching junior data engineering personnel: bringing them up to speed and helping them gain a better understanding of the overall data ecosystem.
- Working experience with Agile methodologies and CI/CD tools like Terraform/Jenkins.
- Working on solution decks, IP builds, and client meetings for requirement gathering.
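As a hedged illustration of the Composer/Airflow orchestration described above, the sketch below defines a minimal daily DAG that loads files from GCS into BigQuery and then runs an ELT transform query. The bucket, project, dataset, and table names are hypothetical placeholders.

```python
# Minimal Cloud Composer / Airflow DAG sketch: GCS -> BigQuery load, then an ELT transform.
# Bucket, project, dataset, and table names are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="orders_elt_daily",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    load_raw = GCSToBigQueryOperator(
        task_id="load_raw_orders",
        bucket="example-landing-bucket",
        source_objects=["orders/{{ ds }}/*.csv"],
        destination_project_dataset_table="example_project.raw.orders",
        source_format="CSV",
        write_disposition="WRITE_TRUNCATE",
        autodetect=True,
    )

    transform = BigQueryInsertJobOperator(
        task_id="build_orders_fact",
        configuration={
            "query": {
                "query": """
                    CREATE OR REPLACE TABLE `example_project.dw.fct_orders` AS
                    SELECT order_id, customer_id, order_ts, SUM(amount) AS total_amount
                    FROM `example_project.raw.orders`
                    GROUP BY order_id, customer_id, order_ts
                """,
                "useLegacySql": False,
            }
        },
    )

    load_raw >> transform  # load first, then transform
```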
Posted 1 week ago
8.0 - 13.0 years
22 - 37 Lacs
Chennai, Bengaluru
Hybrid
Experience:
- 8+ years of experience in data architecture and data engineering roles.
- Proven experience leading large-scale data migration projects, preferably to cloud environments (Alibaba Cloud, AWS, Azure, or GCP).
- 3+ years of hands-on experience with Alibaba Cloud's DataWorks platform or similar data management tools.
- Strong background in data modeling, ETL design, and data integration across various platforms.

Technical Skills:
- Deep understanding of cloud architecture, particularly in Alibaba Cloud's ecosystem (MaxCompute, DataWorks, OSS, etc.).
- Proficiency in SQL, Python, Java, or Scala for data engineering tasks.
- Familiarity with data processing engines such as Apache Spark, Flink, or other big data tools.
- Experience with data governance tools and practices, including data cataloging, data lineage, and metadata management.
- Strong understanding of data integration and movement between different storage systems (databases, data lakes, data warehouses).
- Strong understanding of API integration for data ingestion, including RESTful services and streaming data.
- Experience in data migration strategies, tools, and frameworks for moving data from legacy (on-premises) systems to cloud-based solutions.

Communication & Leadership:
- Excellent communication skills to collaborate with both technical teams and business stakeholders.
- Proven ability to lead, mentor, and guide technical teams during complex projects.
Posted 1 week ago
5.0 - 10.0 years
8 - 18 Lacs
Hyderabad
Work from Office
Role: GCP Data Engineer
Location: Hyderabad
Duration: Full time

Roles & Responsibilities:
- Design, develop, and maintain scalable and reliable data pipelines using Apache Airflow to orchestrate complex workflows.
- Utilize Google BigQuery for large-scale data warehousing, analysis, and querying of structured and semi-structured data.
- Leverage the Google Cloud Platform (GCP) ecosystem, including services like Cloud Storage, Compute Engine, AI Platform, and Dataflow, to build and deploy data science solutions.
- Develop, train, and deploy machine learning models to solve business problems such as forecasting, customer segmentation, and recommendation systems.
- Write clean, efficient, and well-documented code in Python for data analysis, modeling, and automation.
- Use Docker to containerize applications and create reproducible research environments, ensuring consistency across development, testing, and production.
- Perform exploratory data analysis to identify trends, patterns, and anomalies, and effectively communicate findings to both technical and non-technical audiences.
- Collaborate with data engineers to ensure data quality and integrity.
- Stay current with the latest advancements in data science, machine learning, and big data technologies.
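To illustrate the kind of Python-based analysis and modelling work listed above, here is a minimal sketch that pulls a feature set from BigQuery into pandas and fits a simple customer-segmentation model. The dataset, table, and feature names are hypothetical, and a production version would add validation, tuning, and deployment steps.

```python
# Minimal sketch: pull features from BigQuery and fit a simple segmentation model.
# Dataset, table, and column names are hypothetical.
from google.cloud import bigquery
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

QUERY = """
SELECT customer_id, total_orders, avg_order_value, days_since_last_order
FROM `example_project.analytics.customer_features`
"""

client = bigquery.Client()
features = client.query(QUERY).to_dataframe()  # requires pandas and db-dtypes installed

# Scale the numeric features so no single column dominates the clustering.
X = StandardScaler().fit_transform(
    features[["total_orders", "avg_order_value", "days_since_last_order"]]
)
model = KMeans(n_clusters=4, random_state=42, n_init=10)
features["segment"] = model.fit_predict(X)

# Quick profile of each segment for communication with stakeholders.
print(features.groupby("segment").mean(numeric_only=True))
```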
Posted 1 week ago
5.0 - 10.0 years
11 - 16 Lacs
Nagpur, Pune
Work from Office
JOB DESCRIPTION
Off-shore contract Data Engineering role that MUST work out of an approved Clean Room facility. The role is part of an Agile team supporting Financial Crimes data platforms and strategies, including but not limited to their use of SAS Grid and Snowflake.

JOB SUMMARY
Handle the design and construction of scalable data management systems, ensure that all data systems meet company requirements, and research new uses for data acquisition. Required to know and understand the ins and outs of the industry, such as data mining practices, algorithms, and how data can be used.

Primary Responsibilities:
- Design, construct, install, test, and maintain data management systems.
- Build high-performance algorithms, predictive models, and prototypes.
- Ensure that all systems meet the business/company requirements as well as industry practices.
- Integrate up-and-coming data management and software engineering technologies into existing data structures.
- Develop set processes for data mining, data modeling, and data production.
- Create custom software components and analytics applications.
- Research new uses for existing data.
- Employ an array of technological languages and tools to connect systems together.
- Collaborate with members of your team (e.g., data architects, the IT team, data scientists) on the project's goals.
- Install/update disaster recovery procedures.
- Recommend different ways to constantly improve data reliability and quality.
- Maintain up-to-date knowledge, support, and training documentation.

QUALIFICATIONS
- Technical degree or related work experience.
- Proficiency and technical skills relating to: SQL, MySQL, dbt, Snowflake, and SAS.
- Exposure and experience with: ETL (DataStage), scripting (Python, JavaScript, etc.), version control (Git), and highly regulated environments (banking, health care, etc.).
Posted 1 week ago
1.0 - 5.0 years
0 Lacs
maharashtra
On-site
Games2win is a leading global mobile games publisher with over 500 million downloads and 5 million monthly active players. The company prides itself on creating its own Intellectual Property (IP) and leveraging consumer loyalty to promote its games, resulting in a significant number of downloads without the need for advertising or paid installations. Some of Games2win's popular titles include Driving Academy, Decisions, and International Fashion Stylist. To know more about the company, visit https://games2winmedia.com/company-profile/.

As a Data Analyst (SQL) based in Mumbai with hybrid work arrangements, you will report to the Manager, Analytics. The ideal candidate should have a minimum of 5 years of experience.

Role and Responsibilities:
- Assist the Business Analytics team in gathering and organizing data from various sources.
- Generate reports following the team's specified format.
- Provide technical guidance to Associate Analysts.
- Stay updated on new game features to ensure report templates are current.
- Create reports promptly and adhere to the reporting schedule.
- Ensure all necessary data points are being collected by the tools used.
- Conduct initial comparisons between new and previous reports to identify significant data changes.

Background and Experience:
- Proficiency in using BigQuery for data extraction and analysis.
- Minimum of 1 year of experience in a technical or analytics role.
- Strong analytical skills, effective communication, and interpersonal abilities.
- Comfortable working with Excel and SQL.
- Familiarity with data visualization tools such as Power BI, Tableau, Google Studio, etc.
- Ability to manage multiple tasks, maintain accuracy, and work efficiently in a fast-paced, deadline-driven environment.

Educational Qualification:
- A Graduate/Diploma degree in Science/Commerce fields or an equivalent educational background.
Posted 1 week ago
6.0 - 10.0 years
0 - 0 Lacs
maharashtra
On-site
Job Description

About the Job
WonDRx (pronounced as Wonder-Rx) is an innovative and disruptive technology platform in healthcare, aiming to connect patients, doctors, and the entire healthcare ecosystem on a single platform. We are looking for a Data Analytics and Research Manager (AI-driven) to lead analytics and insights strategy aligned with our fast-growing product and business goals. This person will manage data pipelines, apply AI/ML models, perform healthcare research, and build a small but high-performing analytics team.

Key Responsibilities:
- Define and lead the data and analytics roadmap.
- Design and manage health data pipelines, dashboards, and KPIs.
- Apply ML/NLP for patient behavior prediction and analytics automation.
- Conduct market and competitor research to support business strategies.
- Collaborate across teams and present insights to CXOs.
- Mentor a data analytics team, ensuring accuracy and impact.

Tools & Technologies:
- Languages: SQL, Python/R
- AI/ML: scikit-learn, TensorFlow
- BI Tools: Power BI, Tableau, Looker
- Cloud Stack: BigQuery, Snowflake, AWS, Databricks
- GenAI Tools: ChatGPT, Copilot, custom LLMs

Qualifications:
- Bachelor's/Master's in Data Science, Statistics, Engineering, or a related field.
- 6-10 years in analytics, with at least 2+ years in a leadership role.
- Strong business acumen, preferably in healthcare/life sciences.
- Hands-on AI/ML experience.
- Excellent communication and storytelling skills.

Join us to transform the healthcare experience for millions.
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
pune, maharashtra
On-site
As a Sales Excellence - COE - Advanced Modeling Specialist at Accenture, you will be part of the team that empowers individuals to compete, win, and grow by providing the necessary tools to enhance client portfolios, optimize deals, and nurture sales talent through sales intelligence.

In this role, you will utilize your expertise in machine learning algorithms, SQL, R or Python, Advanced Excel, and data visualization tools like Power BI, Power Apps, Tableau, QlikView, and Google Data Studio. Additionally, knowledge of Google Cloud Platform (GCP) and BigQuery, experience in Sales, Marketing, Pricing, Finance, or related fields, familiarity with Salesforce Einstein Analytics, and an understanding of optimization techniques and packages such as Pyomo, SciPy, PuLP, Gurobi, CPLEX, or similar would be advantageous.

Your responsibilities will include generating business insights to enhance processes, increase sales, and maximize profitability for Accenture. You will develop models and scorecards to aid business leaders in comprehending trends and market drivers, collaborate with operations teams to transform insights into user-friendly solutions, and manage tool access and monitor usage.

To excel in this role, you should have a Bachelor's degree or equivalent experience, exceptional English communication skills, a minimum of four years of experience in data modeling and analysis, proficiency in machine learning algorithms, SQL, R or Python, Advanced Excel, and data visualization tools, project management experience, strong business acumen, and meticulous attention to detail.

Furthermore, a sound understanding of sales processes and systems, prior experience with Google Cloud Platform (GCP) and BigQuery, working knowledge of Salesforce Einstein Analytics, familiarity with optimization techniques and packages, and experience with Power Apps would be beneficial. This position requires a minimum of 2 years of relevant experience and a Master's degree in Analytics or a similar field.

Join us at Accenture, where we are committed to driving sales excellence and enabling you to leverage your analytical skills to make a meaningful impact.
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
maharashtra
On-site
As a Senior Product Manager, FinTech at Priceline, you will play a crucial role in contributing to the product strategy, development, and execution of Financial Technology products across all product lines, including flights, hotels, rental cars, and packages. Your enthusiasm and passion will drive you to work closely with various stakeholders within the organization to understand requirements, create detailed product plans, and ensure the successful delivery and launch of FinTech products that add value for both customers and internal teams. Your innovative mindset will allow you to craft and communicate a compelling vision and define solutions in a fast-paced, collaborative environment with colleagues across Priceline's global offices.

In this role, you will collaborate with stakeholders from Commercial Teams, Finance, Technology, Accounting, and Financial Planning & Analysis to bring products to market. You will be responsible for defining product requirements, creating detailed product plans, and overseeing the successful delivery and launch of cross-functional FinTech solutions. Your role will also involve researching, troubleshooting, diagnosing, and recommending solutions to complex business and technical problems. Working closely with engineering teams, you will groom, refine, develop, test, and launch new solutions while assisting in prioritizing features and bugs.

As a subject matter expert on product trends, emerging technologies, and competitor offerings in the FinTech space, you will leverage insights to advise product strategy and drive innovation. You will be expected to be hands-on, involving yourself in various tasks from planning for the next quarter to diving into database records or inspecting API responses to assist the development team in troubleshooting issues. Additionally, you will foster a culture of collaboration, continuous improvement, and customer-centricity within the Finance Technology team and across the organization.

The ideal candidate for this position holds a Bachelor's degree, with an MBA being desirable. You should have 6-8 years of consumer-facing internet product management experience, preferably in defining and driving consumer-facing products, with prior experience in ecommerce and financial services industries being preferred. Strong analytical and quantitative skills are essential, along with the ability to synthesize data and metrics to evaluate assumptions and outcomes. An understanding of the travel landscape, experience with financial services, and familiarity with reconciliation, accounting, and financial systems are advantageous. Your intellectual curiosity, self-starting nature, exceptional collaboration and communication skills, and enthusiasm for both strategic planning and daily execution are key attributes for success in this role.

Priceline values integrity and ethics, and as a member of the Finance Technology team, you will be expected to embody the company's core values of Customer, Innovation, Team, Accountability, and Trust. If you are looking to be part of a dynamic, innovative, and inclusive environment where your contributions are valued, Priceline may be the perfect fit for you.
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
bhubaneswar
On-site
You are a Senior Data Engineer with 7 to 11 years of experience who is proficient in building and managing scalable, secure, and high-performance data pipelines and storage solutions. Your primary focus will be on Microsoft Azure data services.

In Requirement 1 (Azure Focus), your key responsibilities will include designing and implementing robust data pipelines using Azure Data Factory, Azure Data Lake, and other Azure services. You will also develop and optimize large-scale data processing using Databricks and PySpark. Additionally, you will work with both relational (SQL) and NoSQL databases, write clean code in Python and/or Scala, and collaborate with cross-functional teams.

To excel in this role, you must have 7-11 years of professional experience as a Data Engineer, deep expertise with Azure Data Services, solid experience with Databricks and PySpark, proficiency in SQL and NoSQL databases, and a strong background in Python, Scala, and object-oriented programming.

In Requirement 2 (GCP & Databricks Specialist), you will be expected to build, maintain, and optimize scalable data pipelines using Databricks and Google Cloud (BigQuery, Cloud Storage, etc.). You will collaborate with analytics and data science teams, perform exploratory data analysis (EDA), and ensure best practices in data modeling, governance, and lifecycle management.

For this role, you should have 7-11 years of professional experience in data engineering, expert-level proficiency in Databricks, BigQuery, and Python, strong SQL skills with large datasets, exposure to data science concepts and EDA methodologies, and familiarity with CI/CD processes and version control systems.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
maharashtra
On-site
We are looking for a detail-oriented and technically proficient BigQuery Project Administrator with at least 3 years of experience in Google Cloud Platform (GCP), particularly in BigQuery. As a BigQuery Project Administrator, your primary responsibilities will involve overseeing project and cost governance, as well as driving performance and cost optimization initiatives within the BigQuery environment.

Your key responsibilities will include:
- Optimizing and performance tuning by analyzing query patterns, access logs, and usage metrics to propose schema optimizations, partitioning, clustering, or materialized views.
- Identifying opportunities for improving BigQuery query performance and reducing storage/computational costs.
- Collaborating with engineering teams to refactor inefficient queries and optimize workloads.

In addition, you will be responsible for:
- Monitoring and managing BigQuery project structures, billing accounts, configurations, quotas, resource usage, and hierarchies.
- Implementing and enforcing cost control policies, quotas, and budget alerts.
- Serving as a liaison between engineering and finance teams for BigQuery-related matters.
- Defining and promoting BigQuery usage standards and best practices while ensuring compliance with data governance, security, and privacy policies.

To qualify for this role, you should have:
- At least 3 years of experience working with Google Cloud Platform (GCP), specifically in BigQuery.
- A strong understanding of SQL, data warehousing concepts, and cloud cost management.
- Experience with GCP billing, IAM, and resource management.

Preferred certifications for this position include:
- Google Cloud Professional Data Engineer
- Google Cloud Professional Cloud Architect

If you meet these qualifications and are eager to contribute to a dynamic team environment, we encourage you to apply for the BigQuery Project Administrator position.
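As a hedged illustration of the usage analysis this role involves, the sketch below queries BigQuery's INFORMATION_SCHEMA.JOBS_BY_PROJECT view for the heaviest recent queries, a common starting point for spotting optimization candidates. The project id and region are placeholders, and the thresholds you would act on depend on the billing model in use.

```python
# Minimal sketch: find the heaviest BigQuery queries of the last 7 days from job metadata.
# Project id and region are placeholders; adjust to the actual billing setup.
from google.cloud import bigquery

USAGE_SQL = """
SELECT
  user_email,
  LEFT(query, 80) AS query_snippet,
  COALESCE(total_bytes_processed, 0) / POW(10, 12) AS tb_processed,
  COALESCE(total_slot_ms, 0) / 1000 / 3600          AS slot_hours
FROM `example-project.region-us.INFORMATION_SCHEMA.JOBS_BY_PROJECT`
WHERE job_type = 'QUERY'
  AND creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
ORDER BY total_bytes_processed DESC
LIMIT 20
"""

client = bigquery.Client(project="example-project")
for row in client.query(USAGE_SQL).result():
    # Each row is a candidate for partitioning, clustering, or query refactoring.
    print(f"{row.user_email}: {row.tb_processed:.2f} TB, "
          f"{row.slot_hours:.1f} slot-hours | {row.query_snippet}")
```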
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
As an AI/ML Engineer, you will be responsible for identifying, defining, and delivering AI/ML and GenAI use cases in collaboration with business and technical stakeholders. Your role will involve designing, developing, and deploying models using Google Cloud's Vertex AI platform. You will be tasked with fine-tuning and evaluating Large Language Models (LLMs) for domain-specific applications and ensuring responsible AI practices and governance in solution delivery.

Collaboration with data engineers and architects is essential to ensure robust and scalable pipelines. It will be your responsibility to document workflows and experiments for reproducibility and handover readiness. Your expertise in supervised, unsupervised, and reinforcement learning will be applied to develop solutions using Vertex AI features, including AutoML, Pipelines, Model Registry, and Generative AI Studio.

In this role, you will work on GenAI workflows, which include prompt engineering, fine-tuning, and model evaluation. Proficiency in Python is required for developing in ML frameworks such as TensorFlow, PyTorch, scikit-learn, and Hugging Face Transformers. Effective communication and collaboration across product, data, and business teams are crucial for the success of the projects.

The ideal candidate should have hands-on experience with Vertex AI on GCP for model training, deployment, endpoint management, and MLOps. Practical knowledge of PaLM, Gemini, or other LLMs via Vertex AI or open-source tools is preferred. Proficiency in Python for ML pipeline scripting, data preprocessing, and evaluation is necessary. Expertise in ML/GenAI libraries like scikit-learn, TensorFlow, PyTorch, Hugging Face, and LangChain is expected. Experience with CI/CD for ML, containerization using Docker/Kubernetes, and familiarity with GCP services like BigQuery, Cloud Functions, and Cloud Storage are advantageous. Knowledge of media datasets and real-world ML applications in OTT, DTH, and Web platforms will be beneficial in this role.

Qualifications required for this position include a Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field. The candidate should have at least 3 years of hands-on experience in ML/AI or GenAI projects. Any relevant certifications in ML, GCP, or GenAI technologies are considered a plus.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
kochi, kerala
On-site
Beinex is seeking a skilled and motivated Google Cloud Consultant to join our dynamic team. As a Google Cloud Consultant, you will play a pivotal role in assisting our clients in harnessing the power of Google Cloud technologies to drive innovation and transformation. If you are passionate about cloud solutions, client collaboration, and cutting-edge technology, we invite you to join our journey.

Responsibilities:
- Collaborate with clients to understand their business objectives and technology needs, translating them into effective Google Cloud solutions
- Design, implement, and manage Google Cloud Platform (GCP) architectures, ensuring scalability, security, and performance
- Provide technical expertise and guidance to clients on GCP services, best practices, and cloud-native solutions, and adopt an Infrastructure as Code (IaC) approach to establish an advanced infrastructure for both internal and external stakeholders
- Conduct cloud assessments and create migration strategies for clients looking to transition their applications and workloads to GCP
- Work with cross-functional teams to plan, execute, and optimise cloud migrations, deployments, and upgrades
- Assist clients in optimising their GCP usage by analysing resource utilisation, recommending cost-saving measures, and enhancing overall efficiency
- Collaborate with development teams to integrate cloud-native technologies and solutions into application design and development processes
- Stay updated with the latest trends, features, and updates in the Google Cloud ecosystem and provide thought leadership to clients
- Troubleshoot and resolve technical issues related to GCP services and configurations
- Create and maintain documentation for GCP architectures, solutions, and best practices
- Conduct training sessions and workshops for clients to enhance their understanding of GCP technologies and usage

Key Skills Requirements:
- Profound expertise in Google Cloud Platform services, including but not limited to Compute Engine, App Engine, Kubernetes Engine, Cloud Storage, BigQuery, Pub/Sub, Cloud Functions, VPC, IAM, and Cloud Security
- Strong understanding of GCP networking concepts, including VPC peering, firewall rules, VPN, and hybrid cloud configurations
- Experience with Infrastructure as Code (IaC) tools such as Terraform, Deployment Manager, or Google Cloud Deployment Manager
- Hands-on experience with containerisation technologies like Docker and Kubernetes
- Proficiency in scripting languages such as Python and Bash
- Familiarity with cloud monitoring, logging, and observability tools and practices
- Knowledge of DevOps principles and practices, including CI/CD pipelines and automation
- Strong problem-solving skills and the ability to troubleshoot complex technical issues
- Excellent communication skills to interact effectively with clients, team members, and stakeholders
- Previous consulting or client-facing experience is a plus
- Relevant Google Cloud certifications are highly desirable

Perks: Careers at Beinex
- Comprehensive Health Plans
- Learning and development
- Workation and outdoor training
- Hybrid working environment
- On-site travel opportunity
- Beinex Branded Merchandise
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
pune, maharashtra
On-site
You are an experienced Data Engineer who will be responsible for leading the end-to-end migration of the data analytics and reporting environment to Looker at Frequence. Your role will involve designing scalable data models, translating business logic into LookML, and empowering teams across the organization with self-service analytics and actionable insights. You will collaborate closely with stakeholders from data, engineering, and business teams to ensure a smooth transition to Looker, establish best practices for data modeling, governance, and dashboard development.

Your responsibilities will include:
- Leading the migration of existing BI tools, dashboards, and reporting infrastructure to Looker
- Designing, developing, and maintaining scalable LookML data models, dimensions, measures, and explores
- Creating intuitive, actionable, and visually compelling Looker dashboards and reports
- Collaborating with data engineers and analysts to ensure consistency across data sources
- Translating business requirements into technical specifications and LookML implementations
- Optimizing SQL queries and LookML models for performance and scalability
- Implementing and managing Looker's security settings, permissions, and user roles in alignment with data governance standards
- Troubleshooting issues and supporting end users in their Looker adoption
- Maintaining version control of LookML projects using Git
- Advocating for best practices in BI development, testing, and documentation

You should have:
- Proven experience with Looker and deep expertise in LookML syntax and functionality
- Hands-on experience building and maintaining LookML data models, explores, dimensions, and measures
- Strong SQL skills, including complex joins, aggregations, and performance tuning
- Experience working with semantic layers and data modeling for analytics
- Solid understanding of data analysis and visualization best practices
- Ability to create clear, concise, and impactful dashboards and visualizations
- Strong problem-solving skills and attention to detail in debugging Looker models and queries
- Familiarity with Looker's security features and data governance principles
- Experience using version control systems, preferably Git
- Excellent communication skills and the ability to work cross-functionally
- Familiarity with modern data warehousing platforms (e.g., Snowflake, BigQuery, Redshift)
- Experience migrating from legacy BI tools (e.g., Tableau, Power BI, etc.) to Looker
- Experience working in agile data teams and managing BI projects
- Familiarity with dbt or other data transformation frameworks

At Frequence, you will be part of a dynamic, diverse, innovative, and friendly work environment that values creativity and collaboration. The company embraces differences and believes they drive creativity and innovation. The team consists of individuals from varied backgrounds who are all trail-blazing team players, thinking big and aiming to make a significant impact.

Please note that third-party recruiting agencies will not be involved in this search.
Posted 1 week ago
5.0 - 8.0 years
15 - 20 Lacs
Pune
Hybrid
We have an opening for Java with GCP at Pune only. Please let me know if you are fine with this location, and we will process your profile immediately.

Experience: 5-8 Years
Notice Period: 0-30 Days
Mandatory skills: Java - Spring Boot, GCP Pub/Sub, Eventos, Big Data, Bigtable, BigQuery, Composer/Airflow
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
karnataka
On-site
Candidates for this position are preferred to be based in Bangalore, India and will be expected to comply with their team's hybrid work schedule requirements.

Wayfair Data Science is the engine that powers an enterprise obsessed with data. The Wayfair websites generate over 100M clicks from the millions of customers that visit our sites every day to discover and purchase home goods. The Customer Tech Data Science team is focused on understanding and optimizing customer behavior on the website and mobile application as a key enabler for the company to move fast and iterate quickly on big business problems.

At their core, members of the Customer Tech Data Science team at Wayfair are strong in quantitative analysis and enjoy coding, but also want to balance that with their interest in business and applying advanced modeling techniques. They think critically to tackle complex challenges, thrive in a fast-paced environment, and are seeking a high-growth opportunity where they will have an immediate impact on day one. There are significant opportunities for new team members to emerge as leaders, taking on additional projects and responsibilities with strong performance.

As a Data Scientist at Wayfair, you will play a critical role in uncovering insights that shape strategic decisions and improve business performance. You will conduct in-depth analysis across large-scale datasets (clickstream, sales, product, logistics, and customer data) to uncover trends, gaps, and business opportunities. You will champion the use of emerging technologies to transform how data is explored and insights are uncovered. Additionally, you will become a subject matter expert on key business metrics and customer behavior, drive innovation in business analysis, collaborate with cross-functional partners, design and interpret A/B tests, and build scalable dashboards and reports using various tools.

To be successful in this role, you should have a Bachelor's degree in a quantitative discipline, 2-4 years of work experience in analytics or data science, proficiency in programming languages like Python, R, or SAS, strong knowledge of SQL, experience in conducting quantitative analyses on complex datasets, familiarity with data visualization software, and excellent analytical, communication, and problem-solving skills. Experience in e-commerce or retail analytics is a plus.

Wayfair is a global online destination for home goods, committed to industry-leading technology and creative problem-solving. Join us for rapid growth, continuous learning, and dynamic challenges that will define the most rewarding work of your career.

Your personal data is processed in accordance with Wayfair's Candidate Privacy Notice. For any privacy-related questions or requests, please contact dataprotectionofficer@wayfair.com.
Posted 2 weeks ago
10.0 - 14.0 years
0 Lacs
chennai, tamil nadu
On-site
You will be joining as a GCP Data Architect at TechMango, a rapidly growing IT Services and SaaS Product company located in Madurai and Chennai. With over 12 years of experience, you are expected to start immediately and work from the office. TechMango specializes in assisting global businesses with digital transformation, modern data platforms, product engineering, and cloud-first initiatives. In this role, you will be leading data modernization efforts for a prestigious client, Livingston, in a highly strategic project.

As a GCP Data Architect, your primary responsibility will be to design and implement scalable, high-performance data solutions on Google Cloud Platform. You will collaborate closely with stakeholders to define data architecture, implement data pipelines, modernize legacy data systems, and guide data strategy aligned with enterprise goals.

Key Responsibilities:
- Lead end-to-end design and implementation of scalable data architecture on Google Cloud Platform (GCP)
- Define data strategy, standards, and best practices for cloud data engineering and analytics
- Develop data ingestion pipelines using Dataflow, Pub/Sub, Apache Beam, Cloud Composer (Airflow), and BigQuery
- Migrate on-prem or legacy systems to GCP (e.g., from Hadoop, Teradata, or Oracle to BigQuery)
- Architect data lakes, warehouses, and real-time data platforms
- Ensure data governance, security, lineage, and compliance (using tools like Data Catalog, IAM, DLP)
- Guide a team of data engineers and collaborate with business stakeholders, data scientists, and product managers
- Create documentation, high-level design (HLD) and low-level design (LLD), and oversee development standards
- Provide technical leadership in architectural decisions and future-proofing the data ecosystem

Required Skills & Qualifications:
- 10+ years of experience in data architecture, data engineering, or enterprise data platforms
- Minimum 3-5 years of hands-on experience in GCP data services
- Proficient in: BigQuery, Cloud Storage, Dataflow, Pub/Sub, Composer, Cloud SQL/Spanner; Python / Java / SQL; data modeling (OLTP, OLAP, Star/Snowflake schema)
- Experience with real-time data processing, streaming architectures, and batch ETL pipelines
- Good understanding of IAM, networking, security models, and cost optimization on GCP
- Prior experience in leading cloud data transformation projects
- Excellent communication and stakeholder management skills

Preferred Qualifications:
- GCP Professional Data Engineer / Architect Certification
- Experience with Terraform, CI/CD, GitOps, and Looker / Data Studio / Tableau for analytics
- Exposure to AI/ML use cases and MLOps on GCP
- Experience working in agile environments and client-facing roles

What We Offer:
- Opportunity to work on large-scale data modernization projects with global clients
- A fast-growing company with a strong tech and people culture
- Competitive salary, benefits, and flexibility
- Collaborative environment that values innovation and leadership
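As a hedged illustration of the streaming ingestion described above, the sketch below is a minimal Apache Beam pipeline (Dataflow-compatible) that reads events from Pub/Sub and writes them to BigQuery. The project, subscription, table, and field names are hypothetical placeholders.

```python
# Minimal Apache Beam sketch: Pub/Sub -> BigQuery streaming ingestion (runs on Dataflow).
# Project, subscription, table, and field names are placeholders.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_event(message: bytes) -> dict:
    """Decode a Pub/Sub message into a row matching the target BigQuery schema."""
    event = json.loads(message.decode("utf-8"))
    return {
        "event_id": event["event_id"],
        "event_ts": event["event_ts"],
        "payload": json.dumps(event.get("payload", {})),
    }


def run() -> None:
    options = PipelineOptions(streaming=True)
    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
                subscription="projects/example-project/subscriptions/events-sub")
            | "Parse" >> beam.Map(parse_event)
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                table="example-project:analytics.events",
                schema="event_id:STRING,event_ts:TIMESTAMP,payload:STRING",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED)
        )


if __name__ == "__main__":
    run()
```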
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
As a Data Engineer for our data-rich e-commerce platform catering to the life sciences sector, your primary responsibility will be to support infrastructure, develop data pipelines, and deploy pricing logic. You will play a crucial role in ensuring the usability and interface design of internal tools that facilitate experimentation, pricing configuration, and real-time monitoring.

Your key responsibilities will include:
- Building and maintaining ETL pipelines for pricing, shipping, and behavioral datasets
- Collaborating with data scientists and product managers to facilitate model development and experimentation
- Developing APIs or backend logic to implement dynamic pricing algorithms
- Creating internal dashboards or tools with a strong focus on usability and performance
- Ensuring data quality, reliability, and documentation across all systems
- Performing feature engineering to support predictive and optimization algorithms
- Aggregating and transforming high-dimensional datasets at scale to enhance modeling efficiency and robustness
- Optimizing algorithm performance for real-time and large-scale deployment

To excel in this role, you must possess:
- Flexibility to thrive in a dynamic, startup-like environment and tackle diverse tasks with innovative solutions
- 3+ years of experience in data engineering or backend development
- Proficiency in Databricks and distributed data processing frameworks
- Strong skills in Python, SQL, and cloud-based platforms such as AWS, BigQuery, and Snowflake
- Demonstrated expertise in designing user-friendly internal tools and interfaces
- Familiarity with experimentation systems and monitoring infrastructure
- Experience in efficiently handling large-scale, high-dimensional datasets
- Preferred domain knowledge in e-commerce, with a strong advantage for familiarity with the pharmaceutical or scientific supply sector

This is a contract role with the potential for conversion to full-time, starting from August to December 2025. The preferred location for this position is Bangalore, with alternatives in Mumbai and Kathmandu. If you are looking to contribute to a cutting-edge platform and drive impactful changes in the life sciences industry, we welcome your application.
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
ahmedabad, gujarat
On-site
DXFactor is a US-based tech company working with customers globally. We are a certified Great Place to Work and are currently seeking candidates for the role of Data Engineer with 4 to 6 years of experience. Our presence spans the US and India, specifically Ahmedabad. As a Data Engineer at DXFactor, you will be expected to specialize in Snowflake, AWS, and Python.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines for both batch and streaming workflows.
- Implement robust ETL/ELT processes to extract data from diverse sources and load them into data warehouses.
- Build and optimize database schemas following best practices in normalization and indexing.
- Create and update documentation for data flows, pipelines, and processes.
- Collaborate with cross-functional teams to translate business requirements into technical solutions.
- Monitor and troubleshoot data pipelines to ensure optimal performance.
- Implement data quality checks and validation processes.
- Develop and manage CI/CD workflows for data engineering projects.
- Stay updated with emerging technologies and suggest enhancements to existing systems.

Requirements:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Minimum of 4+ years of experience in data engineering roles.
- Proficiency in Python programming and SQL query writing.
- Hands-on experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra).
- Familiarity with data warehousing technologies such as Snowflake, Redshift, and BigQuery.
- Demonstrated ability in constructing efficient and scalable data pipelines.
- Practical knowledge of batch and streaming data processing methods.
- Experience in implementing data validation, quality checks, and error handling mechanisms.
- Work experience with cloud platforms, particularly AWS (S3, EMR, Glue, Lambda, Redshift) and/or Azure (Data Factory, Databricks, HDInsight).
- Understanding of various data architectures, including data lakes, data warehouses, and data mesh.
- Proven ability to debug complex data flows and optimize underperforming pipelines.
- Strong documentation skills and effective communication of technical concepts.
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
hyderabad, telangana
On-site
You should have 6-10 years of hands-on experience in Java development, focusing on building robust data processing components. Your proficiency should include working with Google Cloud Pub/Sub or similar streaming platforms like Kafka. You must be skilled in JSON schema design, data serialization, and handling structured data formats.

As an experienced individual, you should be capable of designing BigQuery views optimized for performance, scalability, and ease of consumption. Your responsibilities will include enhancing and maintaining Java-based adapters to publish transactional data from the Optimus system to Google Pub/Sub. Implementing and managing JSON schemas for smooth and accurate data ingestion into BigQuery will also be part of your role.

Collaboration with cross-functional teams is essential to ensure that data models are structured to support high-performance queries and business usability. Strong communication and teamwork skills are required, along with the ability to align technical solutions with stakeholder requirements. Additionally, you will contribute to continuous improvements in data architecture and integration practices.

The job location is Hyderabad/Bangalore.
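The adapters in this role are Java-based; purely for brevity, here is the same publish pattern sketched in Python: serialize a transactional record to JSON that matches the ingestion schema and publish it to a Pub/Sub topic. The project, topic, and field names are hypothetical placeholders, not the actual Optimus schema.

```python
# Minimal sketch of the publish pattern described above (the role itself uses Java):
# serialize a transactional record to JSON and publish it to a Pub/Sub topic.
# Project, topic, and field names are hypothetical.
import json
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("example-project", "optimus-transactions")

record = {
    "transaction_id": "TX-1001",
    "amount": "249.99",          # kept as a string to preserve NUMERIC precision downstream
    "currency": "USD",
    "created_at": "2024-01-01T10:15:00Z",
}

# The message body must match the JSON schema used for BigQuery ingestion.
future = publisher.publish(
    topic_path,
    data=json.dumps(record).encode("utf-8"),
    origin="optimus-adapter",    # message attribute, useful for routing/filtering
)
print(f"Published message id: {future.result()}")
```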
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
hyderabad, telangana
On-site
As an experienced Analyst with over 6 years of experience, you will have the opportunity to work both independently and collaboratively within a large team of analysts and across various functions with external engineering and product stakeholders. Your primary responsibilities will revolve around pulling data from datasets using SQL, applying transformations, and conducting data analysis to tackle business challenges. Top-decile SQL skills, particularly in BigQuery, are essential. Additionally, you will utilize tools such as Tableau and PowerBI to create intuitive dashboards.

In this role, you will need to thrive in ambiguity and adapt quickly to a fast-paced environment, showcasing strong organizational and coordination skills. As a curious self-starter, you should not fear failure when exploring new datasets and running tests to comprehend existing data structures and infrastructure, even in the absence of comprehensive documentation or guidance.

Your responsibilities will also entail conducting root cause analysis, developing structured solutions while considering constraints, and translating product/business requirements into technical data requirements. Moreover, you will be tasked with composing SQL scripts, creating datamarts/data warehouses, building data pipelines, and generating dashboards and reports to provide valuable insights into business data.

Effective communication is key in this role, as you will be required to aggregate, organize, and visualize data to convey information clearly. You must possess strong verbal and written English communication skills to interact cross-functionally with various team members, including product analysts, data scientists, engineers, program managers, and operations managers.

Furthermore, your problem-solving abilities and quantitative support skills will be crucial in thinking innovatively to drive creative solutions. You will also be expected to debug and optimize existing code while identifying opportunities for enhancements to streamline data infrastructure maintenance efforts.
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
We have a job opportunity for a GCP DevOps Cloud infra support professional with 5-8 years of experience in application support. The role involves working on BFSI projects and requires expertise in GCP, Terraform, BigQuery, and Kubernetes. Proficiency in APM monitoring tools, ticketing tools like Snow and JIRA, and SQL, as well as exposure to Aerospike DB, is preferred. ITIL skills, especially in Major Incident, Problem, and Change Management, are essential for this role. The candidate should be willing to operate on a shift basis.

If you are interested in this position, please send your resume to monika.dorthy@upgrad.com. Only immediate joiners are requested to apply. We will be conducting face-to-face interviews this Saturday at our Bangalore office location. Kindly confirm your availability for the interview by replying with "y" or "n".

Looking forward to meeting you and discussing this exciting opportunity further.
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
chennai, tamil nadu
On-site
You should have proficiency in GCP, data modelling (OLTP, OLAP), indexing, DBSchema, CloudSQL, and BigQuery. As a Data Modeller, you will be responsible for hands-on data modelling for OLTP and OLAP systems. Your role will require an in-depth understanding of conceptual, logical, and physical data modelling. It is essential to have a strong grasp of indexing, partitioning, and data sharding, supported by practical experience in these areas.

Moreover, you must possess a solid understanding of the variables that impact database performance for near-real-time reporting and application interaction. Experience with at least one data modelling tool, preferably DBSchema, is necessary. Individuals with functional knowledge of the mutual fund industry will be preferred for this role. Additionally, a good understanding of GCP databases like AlloyDB, CloudSQL, and BigQuery would be beneficial for your responsibilities.
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
chennai, tamil nadu
On-site
As a Data Modeling Engineer specializing in Near Real-time Reporting, you will be responsible for creating robust and optimized schemas to facilitate near real-time data flows for operational and analytical purposes within Google Cloud environments. Your primary focus will be on designing models that ensure agility, speed, and scalability to support high-throughput, low-latency data access needs.

Your key responsibilities will include designing data models that align with streaming pipelines, developing logical and physical models tailored for near real-time reporting, implementing strategies such as caching, indexing, and materialized views to enhance performance, and ensuring data integrity, consistency, and schema quality during rapid changes.

To excel in this role, you must possess experience in building data models for real-time or near real-time reporting systems, hands-on expertise with GCP platforms such as BigQuery, CloudSQL, and AlloyDB, and a solid understanding of pub/sub, streaming ingestion frameworks, and event-driven design. Additionally, proficiency in indexing strategies and adapting schemas in high-velocity environments is crucial.

Preferred skills for this position include exposure to monitoring, alerting, and observability tools, as well as functional familiarity with financial reporting workflows. Moreover, soft skills like proactive adaptability in fast-paced data environments, effective verbal and written communication, and a collaborative, solution-focused mindset will be highly valued.

By joining our team, you will have the opportunity to design the foundational schema for mission-critical real-time systems, contribute to the performance and reliability of enterprise data workflows, and be part of a dynamic GCP-focused engineering team.

Skills required for this role include streaming ingestion frameworks, BigQuery, reporting, modeling, AlloyDB, pub/sub, CloudSQL, Google Cloud Platform (GCP), data management, real-time reporting, indexing strategies, and event-driven design.
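To illustrate the partitioning and materialized-view techniques mentioned above, here is a minimal hedged sketch using the BigQuery Python client: a partitioned and clustered base table for streamed events plus a materialized view that serves low-latency reporting reads. The dataset, table, and column names are hypothetical placeholders.

```python
# Minimal sketch: partitioned + clustered base table and a materialized view for
# low-latency reporting reads. Dataset, table, and column names are hypothetical.
from google.cloud import bigquery

DDL = """
CREATE TABLE IF NOT EXISTS `example_project.reporting.order_events`
(
  order_id STRING,
  status   STRING,
  amount   NUMERIC,
  event_ts TIMESTAMP
)
PARTITION BY TIMESTAMP_TRUNC(event_ts, HOUR)
CLUSTER BY status, order_id;

CREATE MATERIALIZED VIEW IF NOT EXISTS `example_project.reporting.orders_by_status`
AS
SELECT
  status,
  TIMESTAMP_TRUNC(event_ts, MINUTE) AS minute_ts,
  COUNT(*)    AS orders,
  SUM(amount) AS total_amount
FROM `example_project.reporting.order_events`
GROUP BY status, minute_ts;
"""

client = bigquery.Client()
client.query(DDL).result()  # BigQuery runs the two statements as one script
print("Reporting schema created.")
```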
Posted 2 weeks ago