5.0 - 10.0 years
1 - 1 Lacs
Hyderabad
Work from Office
Job title: Data Scientist/AI Engineer

About Quantaco: At Quantaco, we deliver state-of-the-art predictive financial data services for the Australian hospitality industry. We are the eighth-fastest-growing company in Australia as judged by the country's flagship financial newspaper, The Australian Financial Review, and we are continuing to accelerate through hyper-automation. Our engineers are thought leaders in the business and provide significant input into the design and direction of our technology. Our engineering roles are not singular in their focus: you will develop new data models and predictive models, and ensure pipelines are fully automated and run with bullet-proof reliability. We are a friendly and collaborative team. We work using a mature, design-first development process focused on delivering new features that enhance our customers' experience and improve their bottom line. You'll always be learning at Quantaco.

About the role: We are looking for a Data Scientist with strong software engineering capabilities to join our growing team. This is a key role in helping us unlock the power of data across our platform and deliver valuable insights to hospitality businesses. You will work on projects ranging from statistical modelling and anomaly detection to productionising ML pipelines (ranging from time-series forecasting to neural networks and custom LLMs), integrating with Django and Flask-based web applications, and building data products on Google Cloud using PostgreSQL and BigQuery, as well as ML routines in Databricks/Vertex AI. This role is ideal for someone who thrives in a cross-functional environment, enjoys solving real-world problems with data, and can contribute to production-grade systems.

Our culture and values: Quantaco is a happy and diverse group of professionals who value a strong work ethic, authenticity, creativity, and flexibility. We work hard for each other and for our customers while having fun along the way. If you've got a passion for creating new and impactful data-driven technology and want to realise your potential in a team that values your ideas, then we want to hear from you.

Responsibilities of the role:
- Build and deploy data-driven solutions and machine learning models into production.
- Collaborate with engineers to integrate models into Django/Flask applications and APIs (a minimal serving sketch follows this posting).
- Develop and maintain data pipelines using Python and SQL.
- Proactively seek to link analytical outputs to commercial outcomes.
- Provide technical expertise for proof-of-concept (PoC) and minimum viable product (MVP) phases.
- Clean, transform, and analyse large datasets to extract meaningful insights.
- Write clean, maintainable Python code and contribute to the platform's architecture.
- Work with cloud-native tools (Google Cloud, BigQuery, Cloud Functions, etc.).
- Participate in sprint planning, stand-ups, and team ceremonies as part of an Agile team.
- Document MLOps processes, workflows, and best practices to facilitate knowledge sharing and ensure reproducibility.

You'll fit right in if you:
- Have 3+ years of experience in a data-centric or backend software engineering role.
- Are proficient in production Python, including Django or Flask, and SQL (PostgreSQL preferred).
- Are curious, analytical, and love solving data problems end-to-end.
- Demonstrate a scientific and design-led approach to delivering effective data solutions.
- Have experience with data modelling, feature engineering, and applying ML algorithms in real-world applications.
- Can develop scalable data pipelines and integrate them with cloud platforms (preferably Google Cloud).
- Communicate clearly and can collaborate across technical and non-technical teams.
- Are self-motivated and can work both individually and in a team.
- Love innovation and are always looking for ways to improve.
- Have MLOps experience (mainly around time-series forecasting, LLM and text analysis, and classification & clustering problems).

It would be fantastic (but not essential) if you:
- Hold a degree in data science, mathematics, statistics, or computer science.
- Have experience with BigQuery, Vertex AI, DBT, Databricks, or Terraform.
- Are familiar with containerisation and serverless architecture (Docker/Kubernetes/GCP).
- Have worked with BI tools or data visualisation frameworks (e.g. Looker, Power BI).
- Have exposure to financial data systems or the hospitality industry.

Preferred technical skill set: Google Cloud Platform (BigQuery, Vertex AI, Cloud Run); Python (Django/Flask); Azure (MS SQL Server, Databricks); Postman (API development); DBT, stored procedures; ML (time-series forecasting, LLM, text analysis, classification); Tableau/Looker Studio/Power BI.
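Since the role emphasises integrating models into Flask applications, here is a minimal sketch of serving a trained model behind a Flask endpoint. This is an illustration, not Quantaco's actual stack: the model path, input schema, and route are all invented, and it assumes a scikit-learn model serialised with joblib.

```python
# Minimal sketch: serving a pre-trained model behind a Flask API.
# Hypothetical: the model file, feature layout, and route are illustrative only.
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("models/revenue_forecaster.joblib")  # assumed pre-trained artifact

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()
    # Assumed input shape: {"features": [[...], [...]]} matching the training schema.
    preds = model.predict(payload["features"])
    return jsonify({"predictions": preds.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

In production one would typically front this with gunicorn and add input validation, but the request/response shape is the core integration pattern the posting describes.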
Posted 1 month ago
2.0 - 7.0 years
4 - 7 Lacs
Thiruvananthapuram
Work from Office
Service Manager Role & responsibilities: - Primary responsibility will be to provide commercial and administrative support to service department. - To plan and attain sales target of service department. - Handling service department operations including Day to Day work allocation & planning of service engineers / technicians for job execution at site. - Cordinating service activities with inhouse technicians and sub contractors for territory of West India and key accounts - Spare part management and accountability. - To monitor billings and receivables. - Collection of payments and necessary tax forms. - Prepare quotations for service contracts and casual service jobs - Prepare, Maintain and update regularly the AMC data base and ensure renewal of contracts. - Managing AMC accounts of customers. - Prepare and Coordinate service schedules - Prepare and update list of Warranty units and AMC units - Visit key customers for meetings as and when required. - Depute technicians to sites for service works and approve their expense vouchers. - Monitoring service report files and database. - Organisnig training programs for on field service technicians and sub contractors including customer maintenance personnels from time to time. - Supervising & monitoring customer complaint register to ensure no escalation of open calls - Involving technical resource to analyse root cause of repeated complaints or failure and maintain reports of such findings and analyses for future reference. - Checking of quality of work, measurement certification, billing certification & collection of payments. - Material Planning delivery arrangement for Project/Service - Checking of service reports and inform material requirement. - Closing the complaints as per specified response time resolution and time - Monitoring of movements of Service people. - Co-ordination with Vendors/Contractors. - Support the Sales, Project & Service function in all aspects required from time to time Apply Save Save Pro Insights
Posted 1 month ago
6.0 - 10.0 years
7 - 12 Lacs
Bengaluru
Work from Office
Role: Senior Data Analyst. Experience: 6 to 10 years. Location: Bangalore, Pune, Hyderabad, Gurgaon, Noida. Notice: Immediate joiners only.

Key skills: EDA (Exploratory Data Analysis), communication, strong hands-on SQL, documentation experience, GCP experience, data pipeline experience.

Requirements:
- 8+ years of experience in data mining, working with large relational databases using advanced data extraction and manipulation tools (for example, BigQuery, Teradata, etc.), across both structured and unstructured data.
- Excellent communication skills, both written and verbal; able to explain solutions and problems in a clear and concise manner.
- Experience conducting business analysis to capture requirements from non-technical partners.
- Superb analytical and conceptual thinking skills; able not only to manipulate data but also to derive relevant interpretations from it.
- Proven knowledge of the data management lifecycle, including experience with data quality and metadata management.
- Hands-on background in Computer Science, Statistics, Mathematics, or Information Systems.
- Experience in cloud, including GCP BigQuery and complex SQL querying (a minimal query sketch follows this posting).
- 1-2 years of experience/exposure in the following:
  1. CI/CD release processes using GitLab, Jira, and Confluence.
  2. Creating YAML files and understanding unstructured data such as JSON.
  3. Looker Studio and Dataplex (a plus).
- Hands-on engineering experience is an asset.
- Exposure to Python or Java is nice to have.
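Because the role centres on complex SQL against BigQuery, a minimal sketch of running a query from Python may be useful. The project, dataset, and column names are hypothetical, and it assumes the google-cloud-bigquery package is installed and Application Default Credentials are configured.

```python
# Minimal sketch: querying BigQuery from Python.
# Assumes `pip install google-cloud-bigquery` and Application Default Credentials.
# The project ID, table, and columns below are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

sql = """
    SELECT order_date, SUM(amount) AS revenue
    FROM `my-project.sales.orders`
    GROUP BY order_date
    ORDER BY order_date DESC
    LIMIT 10
"""
for row in client.query(sql).result():
    print(row["order_date"], row["revenue"])
```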
Posted 1 month ago
3.0 - 6.0 years
5 - 9 Lacs
Pune
Work from Office
Data engineers are responsible for building reliable and scalable data infrastructure that enables organizations to derive meaningful insights, make data-driven decisions, and unlock the value of their data assets.

Grade specific: The role supports the team in building and maintaining data infrastructure and systems within an organization.

Skills (competencies): Ab Initio; Agile (software development framework); Apache Hadoop; AWS Airflow; AWS Athena; AWS CodePipeline; AWS EFS; AWS EMR; AWS Redshift; AWS S3; Azure ADLS Gen2; Azure Data Factory; Azure Data Lake Storage; Azure Databricks; Azure Event Hub; Azure Stream Analytics; Azure Synapse; Bitbucket; change management; client centricity; collaboration; continuous integration and continuous delivery (CI/CD); data architecture patterns; data format analysis; data governance; data modeling; data validation; Data Vault modeling; database schema design; decision-making; DevOps; dimensional modeling; GCP Bigtable; GCP BigQuery; GCP Cloud Storage; GCP Dataflow; GCP Dataproc; Git; Greenplum; HQL; IBM DataStage; IBM DB2; industry-standard data modeling (FSLDM); industry-standard data modeling (IBM FSDM); influencing; Informatica IICS; Inmon methodology; JavaScript; Jenkins; Kimball; Linux - Red Hat; negotiation; Netezza; NewSQL; Oracle Exadata; performance tuning; Perl; platform update management; project management; PySpark; Python; R; RDD optimization; CentOS; SAS; Scala; Spark; shell script; Snowflake; Spark code optimization; SQL; stakeholder management; Sun Solaris; Synapse; Talend; Teradata; time management; Ubuntu; vendor management.
Posted 1 month ago
8.0 - 10.0 years
12 - 22 Lacs
Pune, Bengaluru, Delhi / NCR
Hybrid
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose - the relentless pursuit of a world that works better for people - we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of Consultant - Insurance Manual Functional Testing. We are seeking a dynamic and versatile consultant with deep expertise in the insurance industry and strong testing craft, including exposure to the Playwright automation framework. The ideal candidate will bring industry knowledge and quality-engineering skill to the role, driving innovation and engagement within the company and the broader community.

Responsibilities:
- Working knowledge of manual functional testing.
- Experience in TestRail, JIRA & Confluence.
- Experience in the insurance P&C domain.
- Working knowledge of Behaviour Driven Development (BDD) and DevOps concepts.
- Strong understanding of and experience with the Agile framework (design, terminology, ceremonies, construct, etc.).
- Working knowledge of Agile and test management tools (i.e., JIRA, X-Ray, etc.).
- Understanding of several types of automation frameworks: behaviour-driven, data-driven, etc.
- General testing skills, including manual testing, functional test case preparation, test data preparation, test environment setup, etc.
- Effective communication skills and a highly proactive approach.
- Ability to manage and prioritize deliverables.
- Ability to learn and apply new processes and tools.

Qualifications we seek in you!
Minimum qualifications/skills: BE/B.Tech/MCA/M.Tech; valid and relevant years of testing experience.
Preferred qualifications/skills:
- Strong specialty insurance domain & IT knowledge.
- Manual functional testing with good integration coverage.
- Automation testing (Playwright preferable).
- Data warehouse testing experience (mandatory).
- Able to test web apps, desktop apps, API services, DB, and data validations.
- Experience with JIRA/Remedy.
- Iterative/Agile/DevOps/ITIL practices & tools.
- Execution of transformation, integration & automation programs/projects.
- Excellent verbal and written communication skills and analytical reasoning ability.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation. Get to know us at www.genpact.com and on X, Facebook, LinkedIn, and YouTube. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 1 month ago
8.0 - 12.0 years
12 - 13 Lacs
Hyderabad
Work from Office
We are seeking a Technical Architect specializing in Healthcare Data Analytics with expertise in Google Cloud Platform (GCP). The role involves designing and implementing data solutions tailored to healthcare analytics requirements. The ideal candidate will have experience in GCP tools like BigQuery, Dataflow, Dataprep, and Healthcare APIs, and should stay up to date with GCP updates. Knowledge of healthcare data standards, compliance requirements (e.g., HIPAA), and healthcare interoperability is essential. The role requires experience in microservices, containerization (Docker, Kubernetes), and programming languages like Python and Spark. The candidate will lead the implementation of data analytics solutions and collaborate with cross-functional teams, data scientists, and engineers to deliver secure, scalable systems.
Posted 1 month ago
12.0 - 17.0 years
30 - 45 Lacs
Bengaluru
Hybrid
Required skills: Successful candidates will have demonstrated the following skills and characteristics.

Must have:
- Proven expertise in supply chain analytics across domains such as demand forecasting, inventory optimization, logistics, segmentation, and network design.
- Well-versed, hands-on experience with optimization methods such as linear programming, mixed-integer programming, and scheduling optimization (a minimal sketch follows this posting); an understanding of third-party optimization solvers like Gurobi is an added advantage.
- Proficiency in forecasting techniques (e.g., Holt-Winters, ARIMA, ARIMAX, SARIMA, SARIMAX, FBProphet, NBeats) and machine learning techniques (supervised and unsupervised).
- Strong command of statistical modeling, testing, and inference.
- Proficiency with GCP tools: BigQuery, Vertex AI, Dataflow, Looker.
- Experience building data pipelines and models for forecasting, optimization, and scenario planning.
- Strong SQL and Python programming skills; experience deploying models in a GCP environment.
- Knowledge of orchestration tools like Cloud Composer (Airflow).

Nice to have:
- Familiarity with MLOps, containerization (Docker, Kubernetes), and orchestration tools (e.g., Cloud Composer).
- Strong communication and stakeholder engagement skills at the executive level.

Roles and responsibilities:
- Assist analytics projects within the supply chain domain, driving design, development, and delivery of data science solutions.
- Develop and execute project and analysis plans under the guidance of the Project Manager.
- Interact with and advise consultants/clients in the US as a subject matter expert to formalize the data sources to be used, the datasets to be acquired, and the data and use-case clarifications needed to get a strong hold on the data and the business problem to be solved.
- Drive and conduct analysis using advanced analytics tools and coach junior team members.
- Implement the necessary quality-control measures to ensure deliverable integrity, such as data quality, model robustness, and explainability for deployments.
- Validate analysis outcomes and recommendations with all stakeholders, including the client team.
- Build storylines and make presentations to the client team and/or PwC project leadership team.
- Contribute to knowledge- and firm-building activities.
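As a flavour of the linear-programming work described above, here is a minimal toy product-mix LP. It uses the open-source PuLP library with its bundled CBC solver as a stand-in for Gurobi, and all products, coefficients, and capacities are invented for illustration.

```python
# Minimal sketch: a toy product-mix LP solved with PuLP (open-source stand-in for Gurobi).
# All data below (products, profits, labour hours, capacity) is hypothetical.
import pulp

profit = {"chairs": 30, "tables": 50}   # profit per unit
labour = {"chairs": 2, "tables": 4}     # labour hours per unit
labour_capacity = 100                   # total labour hours available

prob = pulp.LpProblem("product_mix", pulp.LpMaximize)
qty = {p: pulp.LpVariable(p, lowBound=0, cat="Integer") for p in profit}

prob += pulp.lpSum(profit[p] * qty[p] for p in profit)               # objective: maximize profit
prob += pulp.lpSum(labour[p] * qty[p] for p in profit) <= labour_capacity  # capacity constraint

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for p, var in qty.items():
    print(p, var.value())
print("profit:", pulp.value(prob.objective))
```

A real network-design or scheduling model has the same shape — variables, an objective, and constraints — just at much larger scale, which is where commercial solvers like Gurobi earn their keep.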
Posted 1 month ago
5.0 - 10.0 years
7 - 14 Lacs
Gurugram, Bengaluru
Work from Office
Required skills:
• 5+ years of hands-on experience in the relevant field
• Knowledge of platforms like BigQuery
• Experience with AWS, Azure, or Google Cloud for scalable data solutions
• Skilled in Azure SQL Managed Services
• Skilled in utilizing Incorta's direct data mapping for optimized data ingestion
• Expertise in Extract, Transform, Load (ETL) processes and workflow automation
• AI-enhanced automation, AMS, Oracle AMS, and Salesforce exposure are added skills

Role & responsibilities:
• Build and optimize data pipelines for ingestion, transformation, and storage using BigQuery, Azure SQL Managed Services, and Incorta.
• Implement AI-driven automation for data pipeline monitoring, performance tuning, and anomaly detection.
• Ensure data governance, security, and compliance standards are met across all platforms.
• Optimize data workflows for cost efficiency and scalability.
• Collaborate with BI, AI, and application teams for seamless data access and analytics.
• Integrate data from multiple sources including Salesforce, Oracle AMS, and other enterprise applications.
Posted 1 month ago
5.0 - 6.0 years
12 - 14 Lacs
Gurugram, Bengaluru
Work from Office
GCP Data Engineer
Location: Gurgaon / Bengaluru. Experience: 5+ years. Job type: Full-time.

Role overview: The Data Engineer will focus on developing, maintaining, and optimizing data pipelines. You will work on BigQuery, Azure SQL Managed Services, and Incorta, ensuring efficient data ingestion, transformation, and governance while integrating AI-driven automation.

Key responsibilities:
- Build and optimize data pipelines for ingestion, transformation, and storage using BigQuery, Azure SQL Managed Services, and Incorta.
- Implement AI-driven automation for data pipeline monitoring, performance tuning, and anomaly detection.
- Ensure data governance, security, and compliance standards are met across all platforms.
- Optimize data workflows for cost efficiency and scalability.
- Collaborate with BI, AI, and application teams for seamless data access and analytics.
- Integrate data from multiple sources including Salesforce, Oracle AMS, and other enterprise applications.

Required skills:
- Primary: BigQuery, ETL development, Azure SQL Managed Services, Incorta
- Secondary: data pipeline optimization, cost optimization, pipeline maintenance & automation
- Additional: AMS, AI-enhanced automation, Salesforce exposure, Oracle AMS

Regards, Team BGT. bougaintechbgt@gmail.com, 9560201779
Posted 1 month ago
4.0 - 9.0 years
8 - 18 Lacs
Hyderabad, Bengaluru
Work from Office
Role & responsibilities Job Description: We are looking for an independent contributor experienced in Data Engineering space. Primary responsibilities include implementation of large-scale data processing (Structural, Statistical etc.,) Pipelines, creates production inference pipelines, associated APIs and analytics that support/provide insights for data driven decision making. Designs and develops data models, APIs, and pipelines to handle analytical workloads, data sharing, and movement across multiple systems at various grains in a large-scale data processing environment. Designs and maintains data systems and data structures for optimal read/write performance. Implements machine learning or statistical/heuristic learning in data pipelines based on input from Data Scientists. Roles and Responsibilities: Work in data streaming, movement, data modelling and data pipeline development Develop pipelines and data model changes in support of rapidly emerging business and project requirements Develop code and maintain systems to support analytics Infrastructure & Data Lake Partner/Contribute to data analysis and machine learning pipelines Design data recovery processes, alternate pipelines to check data quality. Create and maintain continuous data quality evaluation processes Optimize performance of the analytics platform and develop self-healing workflows Be a part of a global team and collaborate and co-develop solutions Qualifying Criteria: Bachelors degree in computer science, information technology, or engineering 5+ Years of prior experience in Data Engineering and Databases Experience with code based ETL framework like Airflow/Prefect Experience with Google Big Query, Google Pub Sub, Google Dataflow Experience building data pipelines on AWS or GCP Experience developing data APIs and pipelines using Python Experience with databases like MySQL/Postgres Experience with intermediate Python programming Experience with advanced SQL (analytical queries) "" Preferred Qualifications: Experience with Visualization tools like Tableau/QlikView/Looker Experience with building Machine Learning pipelines. Mandatory Skills: Data Engineering, Python, Airflow, AWS/ Google Cloud / GCP, Data Streaming, Data Lake, Data Pipelines, Google, Bigquerry, ETL, Google Pub sub, Google Data Flow, Rest API, MySQL, Postgre, SQL Analytics
Posted 1 month ago
5.0 - 8.0 years
9 - 19 Lacs
Gurugram, Bengaluru
Work from Office
Hi, greetings of the day!

Hiring for an MNC. Profile: Sr Data Engineer. Experience: 4-10 years. Interview mode: virtual.

Mandatory skills: PySpark, Python, AWS (Glue, EC2, Redshift, Lambda), Spark, Big Data, ETL, SQL, data warehousing (a minimal PySpark ETL sketch follows this posting).
Good to have: data structures and algorithms.

Responsibilities:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proven experience as a Data Engineer or in a similar role.
- Experience with Python and big data technologies (Hadoop, Spark, Kafka, etc.).
- Experience with relational SQL and NoSQL databases.
- Strong analytic skills related to working with unstructured datasets.
- Strong project management and organizational skills.
- Experience with AWS cloud services: EC2, Lambda (Step Functions), RDS, Redshift.
- Ability to work in a team environment.
- Excellent written and verbal communication skills.
- Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.

Interested candidates can share their resume at avanya@niftelresources.com or contact 9219975840.
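To illustrate the PySpark ETL work the posting asks for, here is a minimal batch job: read CSV, aggregate, write Parquet. The bucket paths and column names are hypothetical.

```python
# Minimal sketch: a PySpark batch ETL job (read CSV, aggregate, write Parquet).
# Paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

orders = spark.read.option("header", True).csv("s3://my-bucket/raw/orders/")

daily = (
    orders.withColumn("amount", F.col("amount").cast("double"))
    .groupBy("order_date")
    .agg(F.sum("amount").alias("revenue"))
)

daily.write.mode("overwrite").parquet("s3://my-bucket/curated/daily_revenue/")
spark.stop()
```

The same script could run as an AWS Glue job with only minor changes, since Glue exposes a standard SparkSession underneath its GlueContext.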
Posted 1 month ago
13.0 - 20.0 years
0 Lacs
Pune
Work from Office
Data Platform Engineer - Technical skills:
- 7+ years of experience in software engineering, with a focus on Big Data and GCP technologies such as Hadoop, PySpark, Terraform, BigQuery, Dataproc, and data management.
- Proven experience in leading software engineering teams, with a focus on mentorship, guidance, and team growth.
- Strong expertise in designing and implementing data pipelines, including ETL processes and real-time data processing.
- Hands-on experience with Hadoop ecosystem tools such as HDFS, MapReduce, Hive, Pig, and Spark.
- Hands-on experience with a cloud platform, particularly Google Cloud Platform (GCP) and its data management services (e.g., Terraform, BigQuery, Cloud Dataflow, Cloud Dataproc, Cloud Storage).
- Solid understanding of data quality management and best practices for ensuring data integrity.
- Familiarity with containerization and orchestration tools such as Docker and Kubernetes is a plus.
- Excellent problem-solving skills and the ability to troubleshoot complex systems.
- Strong communication skills and the ability to collaborate with both technical and non-technical stakeholders.
Posted 1 month ago
3.0 - 5.0 years
5 - 7 Lacs
Chennai, Tamil Nadu
Work from Office
Duration: 12 months. Work type: onsite.

Position description: We are seeking an experienced GCP Data Engineer who can build a cloud analytics platform to meet ever-expanding business requirements with speed and quality using lean Agile practices. You will analyze and manipulate large datasets supporting the enterprise, activating data assets to support Enabling Platforms and Analytics on Google Cloud Platform (GCP). You will be responsible for designing the transformation and modernization on GCP, as well as landing data from source applications to GCP. Experience with large-scale solutions and operationalization of data warehouses, data lakes, and analytics platforms on Google Cloud Platform or another cloud environment is a must. We are looking for candidates with a broad set of technology skills across these areas who can demonstrate an ability to design the right solutions with the appropriate combination of GCP and third-party technologies for deployment on Google Cloud Platform.

Skills required:
- Experience working in an implementation team from concept to operations, providing deep technical subject matter expertise for successful deployment.
- Implement methods for automation of all parts of the pipeline to minimize labor in development and production.
- Experience analyzing complex data, organizing raw data, and integrating massive datasets from multiple data sources to build subject areas and reusable data products.
- Experience working with architects to evaluate and productionalize appropriate GCP tools for data ingestion, integration, presentation, and reporting.
- Experience working with all stakeholders to formulate business problems as technical data requirements, and to identify and implement technical solutions while ensuring key business drivers are captured, in collaboration with product management.
- Proficiency in machine learning model architecture, data pipeline interaction, and metrics interpretation, including designing and deploying a pipeline with automated data lineage.
- Identify, develop, evaluate, and summarize proofs of concept to prove out solutions; test and compare competing solutions and report a point of view on the best one.
- Integration between GCP Data Catalog and Informatica EDC.
- Design and build production data engineering solutions that deliver pipeline patterns using GCP services: BigQuery, Dataflow, Pub/Sub, Bigtable, Data Fusion, Dataproc, Cloud Composer, Cloud SQL, Compute Engine, Cloud Functions, and App Engine (a minimal pipeline sketch follows this posting).

Skills preferred:
- Strong drive for results; ability to multi-task and work independently.
- Self-starter with proven innovation skills.
- Ability to communicate and work with cross-functional teams and all levels of management.
- Demonstrated commitment to quality and project timing.
- Demonstrated ability to document complex systems.
- Experience creating and executing detailed test plans.

Experience required: 3 to 5 years. Education required: BE or equivalent.
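As a taste of the Dataflow pipeline patterns mentioned above, here is a minimal Apache Beam sketch in Python. The file paths and CSV layout are hypothetical; it runs locally with the DirectRunner, and the same code targets Dataflow when the appropriate pipeline options are passed.

```python
# Minimal sketch: an Apache Beam pipeline (local DirectRunner by default).
# Assumes `pip install apache-beam[gcp]`; paths and CSV layout are hypothetical.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# For Dataflow, pass e.g. --runner=DataflowRunner --project=... --region=... --temp_location=gs://...
options = PipelineOptions()

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("input/orders.csv", skip_header_lines=1)
        | "Parse" >> beam.Map(lambda line: line.split(","))          # id,date,amount,status
        | "OnlyPaid" >> beam.Filter(lambda cols: cols[3] == "PAID")
        | "KeyByDate" >> beam.Map(lambda cols: (cols[1], float(cols[2])))
        | "SumPerDay" >> beam.CombinePerKey(sum)
        | "Format" >> beam.MapTuple(lambda day, total: f"{day},{total}")
        | "Write" >> beam.io.WriteToText("output/daily_revenue")
    )
```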
Posted 1 month ago
5.0 - 10.0 years
12 - 16 Lacs
Pune
Work from Office
COMPANY OVERVIEW
Domo's AI and Data Products Platform lets people channel AI and data into innovative uses that deliver a measurable impact. Anyone can use Domo to prepare, analyze, visualize, automate, and build data products that are amplified by AI.

POSITION SUMMARY
Working as a member of Domo's Client Services team, the Senior Technical Consultant will focus on the implementation of fault-tolerant, highly scalable solutions. The successful candidate will have a minimum of 5 years working hands-on with data and will join an enthusiastic, fast-paced, and dynamic team at Domo. A successful candidate will have demonstrated sustained exceptional performance, innovation, creativity, insight, and good judgment.

KEY RESPONSIBILITIES
- Partner with customers, business users, and technical teams to understand data needs and deliver impactful solutions;
- Develop strategies for data acquisition and integration of new data into Domo's Data Engine;
- Map source-system data to Domo's data architecture and define integration strategies;
- Lead database analysis, design, and build efforts, if required;
- Design scalable and efficient data models for the data warehouse or data mart (data structure, storage, and integration);
- Implement best practices for data ingestion, transformation, and semantic modelling;
- Aggregate, transform, and prepare large data sets for use within Domo solutions;
- Provide guidance on how to design and optimize complex SQL queries;
- Provide consultation and mentoring to customers on best practices and skills to drive greater self-sufficiency;
- Ensure data quality and perform validation across pipelines and reports;
- Write Python scripts to automate governance processes;
- Create workflows in Domo to automate business processes;
- Build custom Domo applications or custom bricks to support unique client use cases;
- Develop Agent Catalysts to deliver generative-AI-powered insights within Domo, enabling intelligent data exploration, narrative generation, and proactive decision support through embedded AI features;
- Thoroughly review and document existing data pipelines, and guide customers through them to ensure a seamless transition and operational understanding.

JOB REQUIREMENTS
- 5+ years of experience supporting business intelligence systems in a BI or ETL developer role;
- Expert SQL skills;
- Expertise with Windows and Linux environments;
- Expertise with at least one of the following database technologies and familiarity with the others: relational, columnar, and NoSQL (e.g., MySQL, Oracle, MSSQL, Vertica, MongoDB);
- Strong data modelling skills (conceptual, logical, and physical model design, with both traditional third normal form and dimensional modelling, such as star and snowflake);
- Experience dealing with large data sets;
- Goal-oriented with strong attention to detail;
- Proven experience in effectively partnering with business teams to deliver their goals and outcomes;
- Bachelor's degree in Information Systems, Statistics, Computer Science, or a related field preferred, OR equivalent professional experience;
- Excellent problem-solving skills and creativity; ability to think outside the box;
- Ability to learn and adapt quickly to varied requirements;
- Thrives in a fast-paced environment.

NICE TO HAVE
- Experience working with APIs;
- Experience working with web technologies (JavaScript, HTML, CSS);
- Experience with scripting technologies (Java, Python, R, etc.);
- Experience working with Snowflake, Databricks, or BigQuery;
- Experience defining scope and requirements for projects;
- Excellent oral and written communication skills, and comfort presenting to everyone from entry-level employees to senior vice presidents;
- Experience with statistical methodologies;
- Experience with a wide variety of business data (marketing, finance, operations, etc.);
- Experience with large ERP systems (SAP, Oracle JD Edwards, Microsoft Dynamics, NetSuite, etc.);
- Understanding of data science, data modelling, and analytics.

LOCATION: Pune, Maharashtra, India

INDIA BENEFITS & PERKS
- Medical insurance provided
- Maternity and paternity leave policies
- Baby bucks: a cash allowance to spend on anything for every newborn or adopted child
- Haute Mama: cash allowance for a maternity wardrobe (women employees only)
- Annual leave of 18 days + 10 holidays + 12 sick leaves
- Sodexo meal pass
- Health and wellness benefit
- One-time technology benefit: cash allowance towards the purchase of a tablet or smartwatch
- Corporate National Pension Scheme
- Employee Assistance Programme (EAP)
- Marriage leave up to 3 days
- Bereavement leave up to 5 days

Domo is an equal opportunity employer. #LI-PD1 #LI-Onsite
Posted 1 month ago
5.0 - 8.0 years
7 - 11 Lacs
Hyderabad
Work from Office
Job Title: Google Workspace Automation Specialist. Location: Bangalore, Hyderabad. Experience: 6+ years. CBR: 240K.

We are looking for a Google Workspace Automation Specialist who can efficiently automate workflows using Google Apps Script and develop data-driven solutions with Looker Studio and BigQuery. The ideal candidate will have expertise in cloud-based automation, dashboard development, and data analytics using GCP services.

Key responsibilities:
Google Apps Script automation:
- Develop scripts to automate Google Workspace tasks (Gmail, Drive, Sheets, Admin Console).
- Integrate APIs and third-party applications to streamline workflows.
- Troubleshoot and optimize existing scripts for efficiency.
Google AppSheet development:
- Build and maintain low-code business applications using AppSheet.
- Design workflows and automation for data collection and processing.
- Ensure seamless integration with Google Sheets, BigQuery, and other databases.
Looker Studio & BigQuery analytics:
- Design and develop dashboards for business intelligence and reporting.
- Write optimized SQL queries to extract and transform data in BigQuery.
- Connect and visualize data from multiple sources in Looker Studio.
Google Cloud Platform (GCP) integration:
- Implement and manage GCP services related to data storage and processing.
- Set up ETL pipelines for structured and unstructured data.
- Ensure data security and compliance within the cloud environment.

Required skills & qualifications:
- Proficiency in JavaScript and experience with automation scripts: Google Apps Script, Google AppSheet, and API integrations.
- Hands-on experience in Looker Studio for dashboard creation and data visualization.
- Expertise in SQL and BigQuery for data analysis and reporting.
- Familiarity with Google Cloud Platform (GCP), including IAM, Storage, and Compute.
- Ability to troubleshoot and resolve issues related to Google Workspace and cloud automation.
- Strong problem-solving and analytical skills with attention to detail.

Preferred qualifications:
- Google Apps Script or JavaScript developer background.
- Google Workspace Administrator or Google Cloud certifications.
- Experience with Python, REST APIs, or other scripting languages.
- Knowledge of security best practices in cloud and enterprise environments.
- Previous experience automating business processes using AppSheet and Apps Script.

Do:
- Provide adequate support in architecture planning, migration, and installation for new projects in your own tower (platform/database/middleware/backup).
- Lead the structural/architectural design of a platform/middleware/database/backup etc. according to various system requirements to ensure a highly scalable and extensible solution.
- Conduct technology capacity planning by reviewing current and future requirements.
- Utilize and leverage the new features of all underlying technologies to ensure smooth functioning of the installed databases and applications/platforms, as applicable.
- Strategize and implement disaster recovery plans; create and implement backup and recovery plans.
- Manage the day-to-day operations of the tower: troubleshoot issues, conduct root cause analysis (RCA), and develop fixes to avoid recurrence.
- Plan for and manage upgrades, migration, maintenance, backup, installation, and configuration functions for your own tower.
- Review the technical performance of your own tower and deploy ways to improve efficiency, fine-tune performance, and reduce performance challenges.
- Develop a shift roster for the team to ensure no disruption in the tower.
- Create and update SOPs, data responsibility matrices, operations manuals, daily test plans, data architecture guidance, etc.
- Provide weekly status reports to the client leadership team and internal stakeholders on database activities: progress, updates, status, and next steps.
- Leverage technology to develop a Service Improvement Plan (SIP) through automation and other initiatives for higher efficiency and effectiveness.

Team management:
- Resourcing: forecast talent requirements as per current and future business needs; hire adequate and right resources for the team; train direct reports to make right recruitment and selection decisions.
- Talent management: ensure 100% compliance with Wipro's standards of adequate onboarding and training for team members to enhance capability and effectiveness; build an internal talent pool of HiPos and ensure their career progression within the organization; promote diversity in leadership positions.
- Performance management: set goals for direct reports, conduct timely performance reviews and appraisals, and give constructive feedback; ensure that organizational programs like Performance Nxt are well understood and that the team takes the opportunities presented by such programs, for themselves and the levels below them.
- Employee satisfaction and engagement: lead and drive engagement initiatives for the team; track team satisfaction scores and identify initiatives to build engagement; proactively challenge the team with larger and enriching projects/initiatives for the organization or team; exercise employee recognition and appreciation.

Deliver (performance parameters and measures):
1. Operations of the tower — SLA adherence; knowledge management; CSAT/customer experience; identification of risk issues and mitigation plans.
2. New projects — timely delivery; avoidance of unauthorised changes; no formal escalations.

Mandatory skills: Google Cloud Admin. Experience: 5-8 years.
Posted 1 month ago
5.0 - 9.0 years
13 - 16 Lacs
Chennai
Work from Office
Key Responsibilities: Develop and maintain scalable full-stack applications using Java, Spring Boot, and Angular for building rich UI screens and custom/reusable components. Design and implement cloud-based solutions leveraging Google Cloud Platform (GCP) services such as BigQuery, Google Cloud Storage, Cloud Run, and PubSub. Manage and optimize CI/CD pipelines using Tekton to ensure smooth and efficient development workflows. Deploy and manage Google Cloud services using Terraform, ensuring infrastructure as code principles. Mentor and guide junior software engineers, fostering professional development and promoting systemic change across the development team. Collaborate with cross-functional teams to design, build, and maintain efficient, reusable, and reliable code. Drive best practices and improvements in software engineering processes, including coding standards, testing, and deployment strategies. Required Skills: Java/Spring Boot (5+ years): In-depth experience in developing backend services and APIs using Java and Spring Boot. Angular (3+ years): Proven ability to build rich, dynamic user interfaces and custom/reusable components using Angular. Google Cloud Platform (2+ years): Hands-on experience with GCP services like BigQuery, Google Cloud Storage, Cloud Run, and PubSub. CI/CD Pipelines (2+ years): Experience with tools like Tekton for automating build and deployment processes. Terraform (1-2 years): Experience in deploying and managing GCP services using Terraform. J2EE (5+ years): Strong experience in Java Enterprise Edition for building large-scale applications. Experience mentoring and delivering organizational change within a software development team.
Posted 1 month ago
4.0 - 7.0 years
8 - 14 Lacs
Noida
Hybrid
Data Engineer (L3) || GCP Certified
Employment type: Full-time. Work mode: In-office/Hybrid. Notice: Immediate joiners.

As a Data Engineer, you will design, develop, and support data pipelines and related data products and platforms. Your primary responsibilities include designing and building data extraction, loading, and transformation pipelines across on-prem and cloud platforms. You will perform application impact assessments, requirements reviews, and work estimates; develop test strategies and site reliability engineering measures for data products and solutions; participate in agile development "scrums" and solution reviews; mentor junior Data Engineering Specialists; lead the resolution of critical operations issues; and perform technical data stewardship tasks, including metadata management, security, and privacy by design.

Required skills:
- Design, develop, and support data pipelines and related data products and platforms.
- Design and build data extraction, loading, and transformation pipelines and data products across on-prem and cloud platforms.
- Perform application impact assessments, requirements reviews, and work estimates.
- Develop test strategies and site reliability engineering measures for data products and solutions.
- Participate in agile development "scrums" and solution reviews.
- Mentor junior Data Engineers.
- Lead the resolution of critical operations issues, including post-implementation reviews.
- Perform technical data stewardship tasks, including metadata management, security, and privacy by design.
- Design and build data extraction, loading, and transformation pipelines using Python and other GCP data technologies.
- Demonstrate SQL and database proficiency in various data engineering tasks.
- Automate data workflows by setting up DAGs in tools like Control-M, Apache Airflow, and Prefect (a minimal DAG sketch follows this posting).
- Develop Unix scripts to support various data operations.
- Model data to support business intelligence and analytics initiatives.
- Utilize infrastructure-as-code tools such as Terraform, Puppet, and Ansible for deployment automation.
- Expertise in GCP data warehousing technologies, including BigQuery, Cloud SQL, Dataflow, Data Catalog, Cloud Composer, Google Cloud Storage, IAM, Compute Engine, Cloud Data Fusion, and Dataproc (good to have).

Keywords: data pipelines, agile development, scrums, GCP data technologies, Python, DAGs, Control-M, Apache Airflow, data solution architecture.

Qualifications:
- Bachelor's degree in Software Engineering, Computer Science, Business, Mathematics, or a related field.
- 4+ years of data engineering experience.
- 2 years of data solution architecture and design experience.
- GCP Certified Data Engineer (preferred).

Job type: Full-time.
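A minimal Airflow DAG along the lines described above might look like the following. The DAG id, schedule, and task bodies are placeholders, and the `schedule` argument assumes Airflow 2.4+ (earlier 2.x versions use `schedule_interval`).

```python
# Minimal sketch: an extract -> load DAG in Apache Airflow (2.4+ `schedule` syntax).
# DAG id, schedule, and task bodies are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull rows from the source system")  # placeholder for real extraction logic

def load():
    print("write rows to the warehouse")  # placeholder for real load logic

with DAG(
    dag_id="daily_ingest",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",  # every day at 02:00
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # load runs only after extract succeeds
```

On GCP the same DAG file would typically be dropped into a Cloud Composer environment's DAGs bucket, since Composer is managed Airflow.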
Posted 1 month ago
2.0 - 4.0 years
6 - 9 Lacs
Bengaluru
Work from Office
Role: Finance Controller Lead

Lead cross-functional global teams in developing finance strategies to support strategic alignment with the company's business operations and corporate departments on company goals and initiatives. Manage financial goals that result in strong customer satisfaction, align with company strategy, and optimize costs and supplier relations. Influence senior leaders in setting direction for their functional areas by linking finance and business strategies to optimize business results.
Posted 1 month ago
1.0 - 4.0 years
15 - 20 Lacs
Bengaluru
Work from Office
Job Area: Information Technology Group, Information Technology Group > IT Data Engineer

General summary: We are looking for a savvy Data Engineer to join our analytics team. The candidate will be responsible for expanding and optimizing our data and data pipelines, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate has Python development experience and is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. We believe a candidate with a solid software engineering/development background is a great fit; however, we also recognize that each candidate has a unique blend of skills. The Data Engineer will work with database architects, data analysts, and data scientists on data initiatives and will ensure optimal data delivery is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams. The right candidate will be excited by the prospect of optimizing data to support our next generation of products and data initiatives.

Responsibilities:
- Create and maintain optimal data pipelines; assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing for greater scalability, etc.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Work with stakeholders, including the Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Perform ad hoc analysis and report QA testing.
- Follow Agile/Scrum development methodologies within analytics projects.
- Working SQL knowledge and experience with relational databases, query authoring (SQL), and a variety of database systems.
- Experience building and optimizing big data pipelines and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- Good communication skills; a great team player with the hunger to learn newer ways of problem solving.
- Build processes supporting data transformation, data structures, metadata, dependency, and workload management.
- A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
- Working knowledge of Unix or shell scripting.
- Construct methods to test user acceptance and usage of data.
- Knowledge of predictive analytics tools and problem solving using statistical methods is a plus.
- Experience supporting and working with cross-functional teams in a dynamic environment.
- Demonstrated understanding of the software development life cycle.
- Ability to work independently and with a team in a diverse, fast-paced, and collaborative environment.
- Excellent written and verbal communication skills.
- A quick learner with the ability to handle development tasks with minimal or no supervision.
- Ability to multitask.

We are looking for a candidate with 7+ years of experience in a Data Engineering role and experience with the following software/tools:
- Python, Java, etc.
- Google Cloud Platform.
- Big data frameworks and tools: Apache Hadoop/Beam/Spark/Kafka.
- Workflow management and scheduling using Airflow/Prefect/Dagster.
- Databases like BigQuery and ClickHouse.
- Container orchestration (Kubernetes).
- Optional: one or more BI tools (Tableau, Splunk, or equivalent).

Minimum qualifications:
- 4+ years of IT-related work experience with a Bachelor's degree in Computer Engineering, Computer Science, Information Systems, or a related field, OR 6+ years of IT-related work experience without a Bachelor's degree.
- 2+ years of work experience with programming (e.g., Java, Python).
- 1+ year of work experience with SQL or NoSQL databases.
- 1+ year of work experience with data structures and algorithms.
- Bachelor's or Master's (or equivalent) degree in computer engineering or an equivalent stream.

Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities so they can participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used only to provide reasonable accommodations for individuals with disabilities; we will not respond to requests for updates on applications or resume inquiries.)

Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law.

To all staffing and recruiting agencies: please do not forward resumes to our jobs alias, Qualcomm employees, or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers.
Posted 1 month ago
0.0 - 3.0 years
5 - 9 Lacs
Mumbai
Work from Office
Job Title: Sub Regional Manager. Experience: 0-3 years. Location: Mumbai.

RESPONSIBILITIES

Management responsibilities: The SRM will manage a team of 2 to 4 junior Trainer and Product Specialists. Responsibilities include:
- Manage a team of 2 to 4 junior Trainer and Product Specialists, coordinating their activities, scheduling and verifying their visits, and ensuring timely and accurate reporting of activities.
- Mentor newly recruited trainers, develop their product knowledge, and facilitate their growth as trainers.
- Visit not only the schools directly handled by you, but also, periodically, the schools handled by the trainers you manage. The purposes of these visits are varied and include quality control, relationship building, and ongoing training.

School support responsibilities:
- Deliver training on the program delivery and methodology to the teachers of a select number of high-profile/significant schools which have adopted the program.
- Conduct regular support visits to the assigned schools, monitor sessions, and provide feedback for improvement to the government or school management.
- Manage the delivery and effectiveness of the program in the assigned geography and ensure positive feedback.
- Build relationships and maintain good rapport with the government department and functionaries.

Other responsibilities:
- Support the sales team in making presentations to teachers, educators, and school decision makers to influence them to adopt path-breaking practices.
- Generate timely project reports and documents, ensuring effective communication between the company and the respective government or school partner(s).
- Coordinate activities such as impact assessment processes and other product-related research.

QUALIFICATIONS
The ideal candidate would have:
- A Master's in social work and/or a strong background in teaching and education, experience in the social or development sector, or experience in soft-skills training.
- Excellent communication and presentation skills.
- Excellent data management skills; the SRM will need to effectively manage information for anywhere between 50 and 200 schools.
- Prior experience managing a team (of any size).
- Strong fluency in English and a regional language.
- Experience in project management and coordination (preferred).

Due to the nature of the work, the applicant must be willing to travel extensively. The cost of travel and accommodation will be reimbursed as per the company's HR and finance policy. KPEC is a fast-growing enterprise, and candidates who demonstrate passion and capability can expect substantial growth opportunities.
Posted 1 month ago
5.0 - 10.0 years
2 - 5 Lacs
Bengaluru
Work from Office
Job Title: Data Engineer. Experience: 5-10 years. Location: Bangalore.

Data Engineers with PySpark and AWS Glue experience. AWS is mandatory; GCP and Azure are add-ons.
- Proven experience as a Data Engineer or in a similar role in data architecture, database management, and cloud technologies.
- Proficiency in programming languages such as Python, Java, or Scala.
- Strong experience with data processing frameworks like PySpark, Apache Kafka, or Hadoop.
- Hands-on experience with data warehousing solutions such as Redshift, BigQuery, Snowflake, or similar platforms.
- Strong knowledge of SQL and relational databases (e.g., PostgreSQL, MySQL, etc.).
- Experience with version control tools like Git.
- Familiarity with containerization and orchestration tools like Docker, Kubernetes, and Airflow is a plus.
- Strong problem-solving skills, analytical thinking, and attention to detail.
- Excellent communication skills and ability to collaborate with cross-functional teams.

Certifications needed: Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or equivalent.
Posted 1 month ago
3.0 - 8.0 years
5 - 10 Lacs
Chennai
Hybrid
Duration: 8 months. Work type: onsite.

Position description: We are looking for qualified Data Scientists who can develop scalable solutions to complex real-world problems using machine learning, big data, statistics, and optimization. Potential candidates should have hands-on experience applying first-principles methods, machine learning, data mining, and text mining techniques to build analytics prototypes that work on massive datasets. Candidates should have experience manipulating both structured and unstructured data in various formats, sizes, and storage mechanisms; excellent problem-solving skills with an inquisitive mind that challenges existing practices; and exposure to multiple programming languages and analytical tools, with the flexibility to use the requisite tools/languages for the problem at hand.

Skills required: Machine Learning, GenAI, LLM.
Skills preferred: Python, Google Cloud Platform, BigQuery.

Experience required: 3+ years of hands-on experience using machine learning/text mining tools and techniques such as clustering/classification/decision trees, random forests, support vector machines, deep learning, neural networks, reinforcement learning, and other numerical algorithms (a small classification sketch follows this posting).

Experience preferred: 3+ years of experience in at least one of the following languages: Python, R, MATLAB, SAS. Experience with Google Cloud Platform (GCP), including Vertex AI, BigQuery, DBT, NoSQL databases, and the Hadoop ecosystem.

Education required: Bachelor's degree.
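To make the classification techniques named above concrete, here is a self-contained scikit-learn sketch. It trains a random forest on synthetic data generated for the example, not on any real dataset from the role.

```python
# Minimal sketch: a random-forest classifier on synthetic data (scikit-learn).
# The dataset is generated, so the whole script runs standalone.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test)))
```

Swapping in a support vector machine or gradient-boosted trees is a one-line change, which is why this train/evaluate scaffold is a useful baseline for the prototyping work the posting describes.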
Posted 1 month ago
3.0 - 5.0 years
10 - 14 Lacs
Hyderabad
Work from Office
You will be responsible for the uptime, performance, and operational cost of a large-scale cloud platform or some of our SaaS-based products. You will make daily and weekly operational decisions with the goal of improving uptime while reducing costs. You will drive improvements by gaining in-depth knowledge of the products in your responsibility and applying the latest emerging trends in cloud and SaaS technologies. All your decisions will be focused on providing best-in-class service to the users of our SaaS products. Our organization relies on its central engineering workforce to develop and maintain a product portfolio of several different startups. As part of our engineering team, you'll get to work on several different products every quarter. Our product portfolio continuously grows as we incubate more startups, which means that different products are very likely to use different technologies, architectures & frameworks - a fun place for smart tech lovers!

Candidate requirements:
- 3 to 5 years of experience working in DevOps.
- In-depth knowledge of configuring and hosting services on Kubernetes.
- Hands-on experience configuring and managing a service mesh like Istio.
- Experience working in production environments with AWS, cloud, Agile, CI/CD, and DevOps practices. We live in the cloud.
- Experience with Jenkins, Google Cloud Build, or similar.
- Good to have: experience using PaaS and SaaS services from AWS/Azure/GCP, such as BigQuery, Cloud Storage, S3, etc.
- Good to have: experience configuring, scaling, and monitoring database systems like PostgreSQL, MySQL, MongoDB, and so on.
Posted 1 month ago
4.0 - 7.0 years
10 - 14 Lacs
Noida
Work from Office
Location: Noida (in-office/hybrid; client site if required). Type: Full-time. Immediate joiners preferred.

Must-have skills:
- GCP (BigQuery, Dataflow, Dataproc, Cloud Storage)
- PySpark/Spark and distributed computing expertise
- Apache Iceberg (preferred), Hudi, or Delta Lake (a minimal Iceberg sketch follows this posting)

Role overview: Be part of a high-impact data engineering team focused on building scalable, cloud-native data pipelines. You'll support and enhance EMR platforms using DevOps principles, helping deliver real-time health alerts and diagnostics for platform performance.

Key responsibilities:
- Provide data engineering support to EMR platforms.
- Design and implement cloud-native, automated data solutions.
- Collaborate with internal teams to deliver scalable systems.
- Continuously improve infrastructure reliability and observability.

Technical environment:
- Databases: Oracle, MySQL, MSSQL, MongoDB
- Distributed engines: Spark/PySpark, Presto, Flink/Beam
- Cloud infra: GCP (preferred), AWS (nice to have), Terraform
- Big data formats: Iceberg, Hudi, Delta
- Tools: SQL, data modeling, Palantir Foundry, Jenkins, Confluence
- Bonus: stats/math tools (NumPy, PyMC3), Linux scripting

Ideal for engineers with cloud-native, real-time data platform experience, especially those who have worked with EMR and modern lakehouse stacks.
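For a flavour of the lakehouse table formats listed above, here is a minimal Apache Iceberg-on-Spark sketch. The catalog name, warehouse path, and table are hypothetical, and it assumes the iceberg-spark-runtime package matching your Spark version is on the classpath.

```python
# Minimal sketch: creating and querying an Apache Iceberg table from PySpark.
# Assumes the iceberg-spark-runtime jar for your Spark version is available;
# the catalog name, warehouse path, and table below are hypothetical.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("iceberg-demo")
    .config("spark.sql.catalog.demo", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.demo.type", "hadoop")
    .config("spark.sql.catalog.demo.warehouse", "/tmp/iceberg-warehouse")
    .getOrCreate()
)

spark.sql(
    "CREATE TABLE IF NOT EXISTS demo.db.events (id BIGINT, status STRING) USING iceberg"
)
spark.sql("INSERT INTO demo.db.events VALUES (1, 'OK'), (2, 'ALERT')")
spark.sql("SELECT * FROM demo.db.events WHERE status = 'ALERT'").show()
spark.stop()
```

Iceberg's draw over plain Parquet directories is transactional snapshots and schema evolution, which is what makes time travel and safe concurrent writes possible on a data lake.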
Posted 1 month ago
12.0 - 16.0 years
18 - 25 Lacs
Hyderabad
Remote
JD for Full-Stack Developer. Experience: 10+ years.

Front-end development:
- Design and implement intuitive and responsive user interfaces using React.js or similar front-end technologies.
- Collaborate with stakeholders to create a seamless user experience.
- Create mockups and UI prototypes for quick turnaround using Figma, Canva, or similar tools.
- Strong proficiency in HTML, CSS, JavaScript, and React.js.
- Experience with styling and graph libraries such as Highcharts, Material UI, and Tailwind CSS.
- Solid understanding of React fundamentals, including routing, the virtual DOM, and higher-order components (HOCs).
- Knowledge of REST API integration.
- Understanding of Node.js is a big advantage.

Middleware development:
- Experience with REST API development, preferably using FastAPI (a minimal sketch follows this posting).
- Proficiency in programming languages like Python.
- Integrate APIs and services between front-end and back-end systems.
- Experience with Docker and containerized applications.

Back-end development:
- Experience with orchestration tools such as Apache Airflow or similar.
- Design, develop, and manage simple data pipelines using Databricks, PySpark, and Google BigQuery.
- Medium-level expertise in SQL.
- Basic understanding of authentication methods such as JWT and OAuth.

Bonus skills:
- Experience working with cloud platforms such as AWS, GCP, or Azure.
- Familiarity with Google BigQuery and Google APIs.
- Hands-on experience with Kubernetes for container orchestration.

Contact: Sandeep Nunna. Phone: 9493883212. Email: sandeep.nunna@clonutsolutions.com
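Since the middleware section calls for REST APIs built with FastAPI, here is a minimal sketch. The resource model and routes are invented for illustration, and it assumes fastapi and uvicorn are installed.

```python
# Minimal sketch: a FastAPI service with a validated POST endpoint.
# The Metric model and routes are hypothetical; run with: uvicorn main:app --reload
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Metric(BaseModel):
    name: str
    value: float

@app.post("/metrics")
def create_metric(metric: Metric) -> dict:
    # Pydantic has already validated types here; a real service would persist the record.
    return {"stored": metric.name, "value": metric.value}

@app.get("/health")
def health() -> dict:
    return {"status": "ok"}
```

A React front end would call these endpoints with fetch or axios, which is the front-to-back integration the posting describes.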
Posted 1 month ago