2.0 - 7.0 years
4 - 9 Lacs
hyderabad
Work from Office
Specialism: Data, Analytics & AI. Management Level: Senior Associate.

Summary: In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.

Responsibilities:
- Design, develop, and maintain scalable data pipelines using Azure data services such as Azure Data Factory and Apache Spark.
- Implement efficient Extract, Transform, Load (ETL) processes to move and transform data across various sources.
- Design, develop, and maintain data solutions using Azure Synapse Analytics.
- Implement data ingestion, transformation, and extraction processes using Azure Synapse Pipelines.
- Knowledge of data warehousing concepts.
- Utilize Azure SQL Database, Azure Blob Storage, Azure Data Lake Storage, and other Azure data services to store and retrieve data.
- Performance optimization and troubleshooting capabilities.
- Advanced SQL knowledge, capable of writing optimized queries for faster data workflows.
- Proven work experience in Spark, Python, SQL, and any RDBMS.
- Experience designing solutions for multiple large data warehouses, with a good understanding of cluster and parallel architecture as well as high-scale or distributed RDBMS.
- Must be extremely well versed in handling large-volume data and working with different tools to derive the required solution.

Mandatory skill sets:
- Hands-on experience in ADF or Synapse Analytics.
- Proficiency in Python for data processing and scripting.
- Strong command of SQL: writing complex queries, performance tuning, etc.
- Experience working with Azure Data Lake Storage and data warehouse concepts (e.g., dimensional modeling, star/snowflake schemas).
- Understanding of CI/CD practices in a data engineering context.
- Excellent problem-solving and communication skills.

Preferred skill sets:
- Experience with Delta Lake, Power BI, or Azure DevOps.
- Knowledge of Databricks will be a plus.
- Knowledge of Spark, Scala, or other distributed processing frameworks.
- Exposure to BI tools like Power BI, Tableau, or Looker.
- Familiarity with data security and compliance in the cloud.
- Experience in leading a development team.

Years of experience required: 4-7 years. Education qualification: B.Tech/MBA/MCA. Degrees/Field of Study required: Bachelor of Technology, MBA (Master of Business Administration). Required Skills: Azure Data Lake, Data Warehouse. Additional skills: Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis.
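The extract-transform-load responsibilities above can be illustrated with a minimal, framework-agnostic sketch in plain Python; the record fields, cleaning rules, and in-memory "warehouse" are illustrative assumptions, not details from the posting:

```python
# Minimal ETL sketch: extract raw rows, transform (clean + type), load into a table.
# All field names and rules here are illustrative assumptions.

def extract():
    # Stand-in for reading from a source system (e.g., a database or blob store).
    return [
        {"order_id": "1", "amount": " 250.0 ", "country": "in"},
        {"order_id": "2", "amount": "99.5", "country": "us"},
    ]

def transform(rows):
    # Normalize types and formats so downstream queries stay simple.
    out = []
    for r in rows:
        out.append({
            "order_id": int(r["order_id"]),
            "amount": float(r["amount"].strip()),
            "country": r["country"].upper(),
        })
    return out

def load(rows, table):
    # Idempotent upsert keyed on order_id: reruns don't duplicate data.
    for r in rows:
        table[r["order_id"]] = r
    return table

warehouse = {}
load(transform(extract()), warehouse)
print(warehouse[1]["amount"])  # 250.0
```

In a real ADF or Synapse pipeline each stage would be an activity or notebook, but the idempotent-load pattern shown here carries over directly.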
Posted 2 weeks ago
4.0 - 9.0 years
6 - 11 Lacs
gurugram
Work from Office
We are seeking a seasoned and visionary Product Manager to lead the strategy and execution for our core technical platform and data infrastructure. This is a critical, high-impact role responsible for building the foundational services that empower our engineering and product teams to innovate and deliver value to our customers at scale. You will be the voice of our internal developers and platform consumers, defining the roadmap for everything from our IDP microservices architecture to our data pipelines and analytics platforms.

Job Function & Responsibilities: As the Product Manager for Tech Platform & Data, you will own the "product" that our engineers and business verticals use every day. Your primary goal is to enhance data processing, developer velocity, system reliability, and data accessibility across the organization.

- Vision & Strategy: Define and articulate a clear, long-term vision and strategy for our technical platform and data products. Create and manage a detailed product roadmap that aligns with company objectives and addresses the needs of your internal customers across the company.
- Technical Leadership: Partner closely with senior engineering leaders, architects, and data scientists / ML engineers to make critical architectural and technical design decisions.
- Execution & Prioritization: Translate complex technical requirements into clear product requirement documents (PRDs), epics, and user stories. Ruthlessly prioritize features, enhancements, and technical debt using data-driven frameworks.
- Stakeholder Management: Act as the central point of contact between platform engineering teams and their internal customers (e.g., application developers, data analysts, other product managers). Communicate roadmaps, manage dependencies, and ensure alignment across teams.
- Data as a Product: Champion the "Data as a Product" mindset.
Oversee the entire lifecycle of our data assets, including data ingestion, capture/extraction, transformation, storage, governance, quality, and accessibility for analytics and machine learning.
- Metrics & Measurement: Define and analyze key performance indicators (KPIs) to measure the health and success of the platform, such as system uptime, developer deployment frequency, API latency, and data quality scores.

Required Skills & Experience:
- 4+ years of product management or engineering experience, with 3+ years focused specifically on technical, infrastructure, or data platform products.
- Deep Technical Acumen: comfortable diving deep into technical discussions. A strong understanding of cloud infrastructure (AWS, GCP, or Azure), microservices architecture, APIs, containerization (Docker, Kubernetes), and CI/CD pipelines is essential. IDP experience is a plus.
- Data Expertise: solid experience with modern data stacks, including data warehousing (e.g., Snowflake, BigQuery), data pipelines (e.g., Airflow, dbt), and business intelligence tools. Proficiency in SQL is a must.
- Exceptional Communication: ability to translate complex technical concepts into clear, concise language for both technical and non-technical audiences.
- Proven Leadership: demonstrated ability to lead and influence cross-functional teams without direct authority. Experience mentoring other product managers is a strong plus.
- Educational Background: a Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent hands-on experience as a software engineer or technical architect, is highly preferred. Fintech and enterprise startup experience preferred.

A Typical Day Might Include:
- Leading a backlog grooming session with your core platform engineering team.
- Writing a PRD for a new agentic chat logging service.
- Collaborating with an application PM to understand their future platform needs.
- Analyzing platform usage metrics in a data dashboard to identify adoption trends.
- Presenting your quarterly roadmap to executive leadership.
- Deep-diving with a site reliability engineer (SRE) to triage a production incident and identify long-term solutions.
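Two of the platform KPIs named above (uptime and API latency) reduce to simple computations over raw observations; a minimal sketch, where the sample data, window length, and nearest-rank percentile choice are illustrative assumptions:

```python
# Sketch: computing two platform KPIs from raw observations.
# Data and the percentile method (truncated nearest-rank) are illustrative.

def uptime_pct(total_minutes, downtime_minutes):
    # Fraction of the window the service was up, as a percentage.
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes

def p95_latency(samples_ms):
    # 95th-percentile API latency via a simple nearest-rank index.
    ordered = sorted(samples_ms)
    idx = min(len(ordered) - 1, int(0.95 * len(ordered)))
    return ordered[idx]

minutes_in_month = 30 * 24 * 60  # 43200-minute window
print(uptime_pct(minutes_in_month, 43))  # ~99.90, i.e. roughly "three nines"
print(p95_latency([120, 95, 300, 110, 105, 98, 130, 101, 99, 115]))
```

In practice these numbers come from monitoring systems rather than hand-rolled code, but the definitions a PM tracks are exactly this simple.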
Posted 2 weeks ago
10.0 - 15.0 years
40 - 45 Lacs
bengaluru
Work from Office
Sanas is revolutionizing the way we communicate with the world's first real-time algorithm, designed to modulate accents, eliminate background noises, and magnify speech clarity. Pioneered by seasoned startup founders with a proven track record of creating and steering multiple unicorn companies, our groundbreaking GDP-shifting technology sets a gold standard. Sanas is a 200-strong team, established in 2020. In this short span, we've successfully secured over $100 million in funding. Our innovations have been supported by the industry's leading investors, including Insight Partners, Google Ventures, Quadrille Capital, General Catalyst, Quiet Capital, and other influential investors. Our reputation is further solidified by collaborations with numerous Fortune 100 companies. With Sanas, you're not just adopting a product; you're investing in the future of communication.

We're looking for an experienced and forward-thinking Principal Data Engineer to lead the design and implementation of our end-to-end data infrastructure for industry-leading Voice AI products. This is a high-impact role where you will shape the technical vision, own strategic architecture decisions, and mentor a growing team of data engineers focused on delivering reliable and scalable data systems for machine learning at scale. You'll work cross-functionally with AI research scientists, infrastructure, and product teams to ensure that data - from raw audio to training-ready features - is consistently accessible, compliant, and optimized for speed and scale. You'll help push the boundaries of real-time Voice AI!

Key Responsibilities:
- Architect and lead the development of large-scale data pipelines and data lakes to ingest, transform, and serve high-quality data for AI model training, product telemetry, and analytics.
- Drive long-term data infrastructure strategy across streaming and batch, feature store extensions, Iceberg/Delta Lake choices, metadata management, and lakehouse evolution.
- Drive platform and infrastructure decisions, optimizing compute fleets (e.g., Ray, Spark clusters), orchestration tooling (Airflow, Dagster), and streaming stacks (Kafka, Flink).
- Collaborate with AI research scientists, engineering leads, product, finance, marketing, and legal to align data architecture with business and regulatory requirements.
- Advocate best practices in data governance, lineage, observability, testing, tooling, and disaster recovery across pipelines and data stores.
- Act as a mentor and technical leader: review designs and code, share patterns, elevate team capability, and support recruitment and hiring.
- Drive build-vs-buy decisions for data quality and observability tooling to achieve high data quality.

Qualifications:
- 10+ years of experience in data engineering, infrastructure, or ML systems, with at least 2 years in a technical leadership capacity.
- Expertise in building distributed batch and real-time data systems.
- Expertise in databases (like Postgres) and data lakes (like Snowflake, Databricks, and ClickHouse).
- Experience using data processing frameworks like Spark, Flink, and Ray.
- Deep experience with cloud platforms (AWS/GCP), object storage (e.g., S3), and orchestrators like Airflow and Dagster.
- Strong knowledge of data lifecycle management, including privacy, security, compliance, and reproducibility.
- Comfortable working in a fast-paced startup environment.
- Strategic mindset and proven ability to collaborate across engineering, ML, and product teams to deliver infrastructure that scales with the business.

Nice to Have:
- Familiarity with audio data and its unique challenges (large file sizes, time-series features, metadata handling) is a strong plus.
- Experience with Voice AI models like ASR, TTS, and speaker verification.
- Familiarity with real-time data processing frameworks like Kafka, Flink, Druid, and Pinot.
- Familiarity with ML workflows, including MLOps, feature engineering, model training, and inference.
- Experience with labeling tools, audio annotation platforms, or human-in-the-loop annotation pipelines.

Joining us means contributing to the world's first real-time speech understanding platform, revolutionizing contact centers and enterprises alike. Our technology empowers agents, transforms customer experiences, and drives measurable growth. But this is just the beginning. You'll be part of a team exploring the vast potential of an increasingly sonic future.
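The streaming side of the pipelines described above usually revolves around windowed aggregation; a minimal tumbling-window sketch in plain Python, where the event names and window length are illustrative assumptions (a production version would run in a stream processor such as Flink):

```python
# Tumbling-window aggregation sketch: group (timestamp, key) events into fixed,
# non-overlapping time windows. Event names and window size are illustrative.
from collections import defaultdict

def tumbling_window_counts(events, window_secs):
    """Count events per (window_start, key) for fixed-size windows."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_secs) * window_secs  # floor to window boundary
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(0, "audio_chunk"), (3, "audio_chunk"), (5, "audio_chunk"), (11, "telemetry")]
windows = tumbling_window_counts(events, 5)
print(windows)  # {(0, 'audio_chunk'): 2, (5, 'audio_chunk'): 1, (10, 'telemetry'): 1}
```

The same floor-to-boundary idea underlies windowing in Kafka Streams and Flink; those systems add the hard parts (late data, watermarks, state) that this sketch omits.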
Posted 2 weeks ago
8.0 - 13.0 years
35 - 40 Lacs
bengaluru
Work from Office
Candescent is the largest non-core digital banking provider. We bring together the transformative technologies that power and connect account opening, digital banking and branch solutions for banks and credit unions of all sizes on any core. Our Candescent solutions power the top three U.S. mobile banking apps and are trusted by banks and credit unions of all sizes. We offer an extensive portfolio of industry-leading products and services with an extensible ecosystem of out-of-the-box and integrated partner solutions. In addition, our API-first architecture and developer tools enable financial institutions to optimize and expand upon their existing capabilities by seamlessly integrating custom-built or third-party solutions. And our connected in-person, remote and digital experiences reinvent customer service across all channels. Self-service configuration and marketing tools give financial institutions greater control of their branding, targeted messaging and overall user experience. And data-driven analytics and reporting tools provide valuable insights to help drive continued growth and profitability. From conversions and implementations to custom development and customer care, our clients get expert, end-to-end support at every step.

Essential Duties and Responsibilities:
- Data Lake Organization: Structure data lake assets using medallion architecture, following domain-driven and source-driven approaches. (10%)
- Data Pipeline Design and Development: Develop, deploy, and orchestrate data pipelines using Data Factory and PySpark/SQL notebooks, and ensure smooth data flow from various sources to storage and processing systems. (20%)
- Design and Build Data Systems: Create and maintain Candescent's data systems, databases, and data warehouses to store and manage large volumes of data. (20%)
- Data Compliance and Security: Ensure that data systems comply with security standards and regulations, protecting sensitive information.
(10%)
- Collaboration: Work closely with the Data Management team (data architects, data analysts, and other stakeholders) to understand data needs and implement approved data solutions. (20%)
- Troubleshooting and Optimization: Troubleshoot data-related issues and continuously optimize data systems for better performance. (15%)

Requirements:
- 8+ years of IT experience implementing design patterns for data systems.
- Extensive experience building API-based data pipelines using the Azure ecosystem.
- Ability to build, maintain, and improve data architecture, data collection, and data storage systems.
- Ability to build and orchestrate end-to-end pipelines using the Microsoft Fabric stack (ADF/Dataflows).
- Proficiency in ETL/ELT technologies with a focus on the Microsoft Fabric stack (ADF, Spark, SQL).
- Proficiency in building data warehouse models (dimensional models) utilizing Azure Synapse and Azure Delta lakehouse.
- Extensive programming experience in data processing languages (SQL/T-SQL, Python, or Scala).
- Expertise in code management utilizing GitHub as the primary repository.
- Experience with DevOps practices, configuration frameworks, and CI/CD automation tooling.
- Collaborate with report developers/analysts and business teams to improve data models that feed BI tools and increase data accessibility.

EEO Statement: Integrated into our shared values is Candescent's commitment to diversity and equal employment opportunity. All qualified applicants will receive consideration for employment without regard to sex, age, race, color, creed, religion, national origin, disability, sexual orientation, gender identity, veteran status, military service, genetic information, or any other characteristic or conduct protected by law. Candescent is committed to being a globally inclusive company where all people are treated fairly, recognized for their individuality, promoted based on performance and encouraged to strive to reach their full potential.
We believe in understanding and respecting differences among all people. Every individual at Candescent has an ongoing responsibility to respect and support a globally diverse environment. Statement to Third Party Agencies To ALL recruitment agencies: Candescent only accepts resumes from agencies on the preferred supplier list. Please do not forward resumes to our applicant tracking system, Candescent employees, or any Candescent facility. Candescent is not responsible for any fees or charges associated with unsolicited resumes.
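The medallion layering named in the responsibilities (commonly bronze/silver/gold) can be sketched minimally in plain Python; the layer roles follow the widely used convention, and every record field and rule below is an illustrative assumption rather than a Candescent detail:

```python
# Medallion-architecture sketch: bronze = raw as ingested, silver = cleaned and
# typed, gold = business-level aggregate. Field names are illustrative.

bronze = [  # raw ingested records, kept unmodified for auditability
    {"acct": "A1", "amt": "100.50", "ok": "true"},
    {"acct": "A1", "amt": "bad", "ok": "true"},    # a malformed row
    {"acct": "A2", "amt": "20.00", "ok": "false"},
]

def to_silver(rows):
    # Clean and type; drop rows that fail validation instead of failing the batch.
    silver = []
    for r in rows:
        try:
            silver.append({"acct": r["acct"], "amt": float(r["amt"]), "ok": r["ok"] == "true"})
        except ValueError:
            continue  # a real pipeline would quarantine and log this row
    return silver

def to_gold(rows):
    # Aggregate to an analytics-ready shape: total valid amount per account.
    totals = {}
    for r in rows:
        if r["ok"]:
            totals[r["acct"]] = totals.get(r["acct"], 0.0) + r["amt"]
    return totals

gold = to_gold(to_silver(bronze))
print(gold)  # {'A1': 100.5}
```

In Fabric or Databricks each layer would be a Delta table and each function a notebook or pipeline activity, but the contract between layers is the same.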
Posted 2 weeks ago
0.0 - 5.0 years
2 - 7 Lacs
hyderabad
Work from Office
Job Description

Project Overview: Media Mix Optimization (MMO)

Our MMO platform is an in-house initiative designed to empower clients with data-driven decision-making in marketing strategy. By applying Bayesian and frequentist approaches to media mix modeling, we are able to quantify channel-level ROI, measure incrementality, and simulate outcomes under varying spend scenarios. Key components of the project include:

- Data Integration: Combining client first-party, third-party, and campaign-level data across digital, offline, and emerging channels into a unified modeling framework.
- Model Development: Building and validating media mix models (MMM) using advanced statistical and machine learning techniques such as hierarchical Bayesian regression, regularized regression (Ridge/Lasso), and time-series modeling.
- Scenario Simulation: Enabling stakeholders to forecast outcomes under different budget allocations through simulation and optimization algorithms.
- Deployment & Visualization: Using Streamlit to build interactive, client-facing dashboards for model exploration, scenario planning, and actionable recommendation delivery.
- Scalability: Engineering the system to support multiple clients across industries with varying data volumes, refresh cycles, and modeling complexities.

Responsibilities:
- Develop, validate, and maintain media mix models to evaluate cross-channel marketing effectiveness and return on investment.
- Engineer and optimize end-to-end data pipelines for ingesting, cleaning, and structuring large, heterogeneous datasets from multiple marketing and business sources.
- Design, build, and deploy Streamlit-based interactive dashboards and applications for scenario testing, optimization, and reporting.
- Conduct exploratory data analysis (EDA) and advanced feature engineering to identify drivers of performance.
- Apply Bayesian methods, regularization, and time-series analysis to improve model accuracy, stability, and interpretability.
- Implement optimization and scenario-planning algorithms to recommend budget allocation strategies that maximize business outcomes.
- Collaborate closely with product, engineering, and client teams to align technical solutions with business objectives.
- Present insights and recommendations to senior stakeholders in both technical and non-technical language.
- Stay current with emerging tools, techniques, and best practices in media mix modeling, causal inference, and marketing science.

Qualifications:
- Bachelor's or Master's degree in Data Science, Statistics, Computer Science, Applied Mathematics, or a related field.
- Proven hands-on experience in media mix modeling and marketing analytics.
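One standard building block of the media mix models described above is the adstock (carryover) transform, which captures how a channel's effect persists after spend stops. A minimal sketch of the common geometric form, where the decay rate and weekly spend series are illustrative assumptions:

```python
# Geometric adstock: this week's effective media pressure equals this week's
# spend plus a decayed fraction of last week's adstock. Values are illustrative.

def geometric_adstock(spend, decay):
    """Apply carryover: adstock[t] = spend[t] + decay * adstock[t-1]."""
    adstocked, carry = [], 0.0
    for s in spend:
        carry = s + decay * carry
        adstocked.append(carry)
    return adstocked

weekly_spend = [100.0, 0.0, 0.0, 50.0]
print(geometric_adstock(weekly_spend, 0.5))  # [100.0, 50.0, 25.0, 62.5]
```

In an MMM, the adstocked series (often combined with a saturation curve) is what enters the regression, and the decay rate is itself estimated, e.g. as a parameter in a hierarchical Bayesian model.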
Posted 2 weeks ago
10.0 - 15.0 years
40 - 45 Lacs
bengaluru
Work from Office
Roles & Responsibilities:
- Define and drive the long-term AI engineering strategy and roadmap aligned with the company's business goals and innovation vision, focusing on scalable AI and machine learning solutions, including Generative AI.
- Lead, mentor, and grow a high-performing AI engineering team, fostering a culture of innovation, collaboration, and technical excellence.
- Collaborate closely with product, data science, infrastructure, and business teams to identify AI use cases, design end-to-end AI solutions, and integrate them seamlessly into products and platforms.
- Oversee the architecture, development, deployment, and continuous improvement of AI/ML models and systems, ensuring scalability, robustness, and real-time performance.
- Own the full AI/ML lifecycle, including data strategy, model development, validation, deployment, monitoring, and retraining pipelines.
- Evaluate and incorporate state-of-the-art AI technologies, frameworks, and external AI services (e.g., APIs, pre-trained models) to accelerate delivery and enhance capabilities.
- Establish and enforce engineering standards, best practices, and observability tools (e.g., MLflow, LangSmith) for model governance, performance tracking, and compliance with data privacy and security requirements.
- Collaborate with infrastructure and DevOps teams to design and maintain cloud infrastructure optimized for AI workloads, including GPU acceleration and MLOps automation.
- Manage project timelines, resource allocation, and cross-team coordination to ensure timely delivery of AI initiatives.
- Stay abreast of emerging AI trends, research, and tools to continuously evolve the AI engineering function.

Required Skills & Qualifications:
- 10 to 15 years of experience in AI, machine learning, or data engineering roles, with at least 8 years in leadership or managerial positions.
- A Bachelor's, Master's, or PhD degree from a top-tier college in Computer Science, Statistics, Mathematics, or a related quantitative field is strongly preferred.
- Proven experience leading AI engineering teams and delivering production-grade AI/ML systems at scale.
- Strong expertise in machine learning algorithms, deep learning, NLP, computer vision, and Generative AI technologies.
- Hands-on experience with AI/ML frameworks and libraries such as TensorFlow, PyTorch, Keras, Hugging Face Transformers, LangChain, MLflow, and related tools.
- Solid understanding of data engineering concepts and ETL pipelines, plus working knowledge of distributed computing frameworks (Spark, Hadoop).
- Experience with cloud platforms (AWS, Azure, GCP) and container orchestration (Kubernetes, Docker).
- Familiarity with software engineering best practices, including CI/CD, version control (Git), and microservices architecture.
- Strong problem-solving skills with a product-oriented mindset and the ability to translate business needs into technical solutions.
- Excellent communication skills to collaborate effectively across technical and non-technical teams.
- Experience in AI governance, model monitoring, and compliance with data privacy/security standards.

Preferred Qualifications:
- Experience building or managing ML platforms or MLOps pipelines.
- Knowledge of NoSQL databases (MongoDB, Cassandra) and real-time data processing.
- Prior exposure to AI in specific domains such as banking, finance, and credit is a strong plus.

This role offers the opportunity to lead AI innovation at scale, shaping the future of AI-powered products and services in a fast-growing, technology-driven environment.
Posted 2 weeks ago
8.0 - 13.0 years
35 - 40 Lacs
bengaluru
Work from Office
Roles & Responsibilities:
- Define and lead the data architecture vision and strategy, ensuring it supports analytics, ML, and business operations at scale.
- Architect and manage cloud-native data platforms using Databricks and AWS, leveraging the lakehouse architecture to unify data engineering and ML workflows.
- Build and optimize large-scale batch and streaming pipelines using Apache Spark, Airflow, and AWS Glue, ensuring high availability and fault tolerance.
- Design and develop data marts, warehouses, and analytics-ready datasets tailored for BI, product, and data science teams.
- Implement robust ETL/ELT pipelines with a focus on reusability, modularity, and automated testing.
- Enforce and scale data governance practices, including data lineage, cataloging, access management, and compliance with security and privacy standards.
- Partner with ML engineers and data scientists to build and deploy ML pipelines, leveraging Databricks MLflow, Feature Store, and MLOps practices.
- Provide architectural leadership across data modeling, data observability, pipeline monitoring, and CI/CD for data workflows.
- Evaluate emerging tools and frameworks, recommending technologies that align with platform scalability and cost-efficiency.
- Mentor data engineers and foster a culture of technical excellence, innovation, and ownership across data teams.

Required Skills & Qualifications:
- 8+ years of hands-on experience in data engineering, with at least 4 years in a lead or architect-level role.
- Deep expertise in Apache Spark, with proven experience developing large-scale distributed data processing pipelines.
- Strong experience with the Databricks platform and its ecosystem (e.g., Delta Lake, Unity Catalog, MLflow, job orchestration, workspaces, clusters, lakehouse architecture).
- Extensive experience with workflow orchestration using Apache Airflow.
- Proficiency in both SQL and NoSQL databases (e.g., Postgres, DynamoDB, MongoDB, Cassandra), with a deep understanding of schema design, query tuning, and data partitioning.
- Proven background in building data warehouse/data mart architectures using AWS services like Redshift, Athena, Glue, Lambda, DMS, and S3.
- Strong programming and scripting ability in Python (preferred) or other AWS-compatible languages.
- Solid understanding of data modeling techniques, versioned datasets, and performance tuning strategies.
- Hands-on experience implementing data governance, lineage tracking, data cataloging, and compliance frameworks (GDPR, HIPAA, etc.).
- Experience with real-time data streaming using tools like Kafka, Kinesis, or Flink.
- Working knowledge of MLOps tooling and workflows, including automated model deployment, monitoring, and ML pipeline orchestration. Familiarity with MLflow, Feature Store, and Databricks-native ML tooling is a plus.
- Strong grasp of CI/CD for data and ML pipelines, automated testing, and infrastructure-as-code (Terraform, CDK, etc.).
- Excellent communication, leadership, and mentoring skills, with a collaborative mindset and the ability to influence across functions.
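Data partitioning, one of the skills listed above, is at heart a routing decision: send each record to a partition chosen by a stable hash of its key, so the same key always lands together. A minimal sketch in plain Python, where the key name and partition count are illustrative assumptions:

```python
# Hash-partitioning sketch: route records to N partitions by a stable hash of the
# key. Key names and the partition count are illustrative.
import zlib

def partition_for(key, num_partitions):
    # crc32 is stable across processes, unlike Python's built-in hash() for
    # strings, which is randomized per interpreter run.
    return zlib.crc32(key.encode()) % num_partitions

def partition_records(records, num_partitions):
    parts = [[] for _ in range(num_partitions)]
    for rec in records:
        parts[partition_for(rec["user_id"], num_partitions)].append(rec)
    return parts

records = [{"user_id": f"u{i}", "event": "click"} for i in range(8)]
parts = partition_records(records, 4)
print([len(p) for p in parts])  # all 8 records land somewhere, grouped by key hash
```

Kafka's default partitioner and DynamoDB's partition-key routing follow the same stable-hash principle; the stability is what makes per-key ordering and locality guarantees possible.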
Posted 2 weeks ago
10.0 - 15.0 years
40 - 45 Lacs
bengaluru
Work from Office
Sanas is revolutionizing the way we communicate with the world's first real-time algorithm, designed to modulate accents, eliminate background noises, and magnify speech clarity. Pioneered by seasoned startup founders with a proven track record of creating and steering multiple unicorn companies, our groundbreaking GDP-shifting technology sets a gold standard. Sanas is a 200-strong team, established in 2020. In this short span, we've successfully secured over $100 million in funding. Our innovations have been supported by the industry's leading investors, including Insight Partners, Google Ventures, Quadrille Capital, General Catalyst, Quiet Capital, and other influential investors. Our reputation is further solidified by collaborations with numerous Fortune 100 companies. With Sanas, you're not just adopting a product; you're investing in the future of communication.

We're looking for an experienced and forward-thinking Principal Machine Learning Engineer to lead the design and implementation of our end-to-end machine learning infrastructure for industry-leading Voice AI products. This is a high-impact role where you will shape the technical vision, own strategic architecture decisions, and mentor a growing team of machine learning engineers focused on delivering reliable and scalable machine learning training and inference systems. You'll work cross-functionally with AI research scientists, infrastructure, and product teams to ensure that machine learning infrastructure is designed and built to accelerate innovation through increased experimentation and deployment velocity. You'll help push the boundaries of real-time Voice AI!

Key Responsibilities:
- Architect robust, modular ML pipelines for model experimentation, feature extraction, and production inference.
- Collaborate with data engineering to improve audio dataset quality, labeling pipelines, and feature engineering.
- Mentor and collaborate with other ML engineers and research scientists to ensure best practices in model development, evaluation, and deployment.
- Optimize models for latency, memory, and real-time performance on CPU/GPU/edge hardware.
- Introduce frameworks for continual learning, model versioning, and A/B testing in production.
- Stay current with advancements in Voice AI, deep learning, and multimodal model architectures.

Qualifications:
- 10+ years of experience in machine learning systems and ML workflows, with at least 3 years in a technical leadership capacity.
- Advanced proficiency in Python and ML frameworks like PyTorch, TensorFlow, or JAX.
- Strong understanding of deep learning architectures like RNNs, LSTMs, CNNs, Transformers, and CTC, and their application in accent translation, noise cancellation, acoustic modeling, language modeling, and language translation.
- Experience deploying ML models to production (e.g., via ONNX, TensorRT, TorchScript, or custom inference stacks).

Nice to Have:
- Familiarity with audio data and its unique challenges (large file sizes, time-series features, metadata handling) is a strong plus.
- Experience with Voice AI models like ASR, TTS, and speaker verification.
- Familiarity with real-time data processing frameworks like Kafka, Flink, Druid, and Pinot.
- Familiarity with ML workflows, including MLOps, feature engineering, model training, and inference.
- Experience with labeling tools, audio annotation platforms, or human-in-the-loop annotation pipelines.
- Experience at a high-growth startup or tech company operating at scale.
- Deep experience with ML tooling for training and serving models, ideally in audio or speech domains (e.g., PyTorch, ONNX, Hugging Face Transformers, torchaudio).
- Experience deploying real-time ASR, TTS, or voice synthesis models in production.
- Background in DSP, audio augmentation, or working with noisy or multilingual datasets.
Joining us means contributing to the world's first real-time speech understanding platform, revolutionizing contact centers and enterprises alike. Our technology empowers agents, transforms customer experiences, and drives measurable growth. But this is just the beginning. You'll be part of a team exploring the vast potential of an increasingly sonic future.
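The model versioning and A/B testing responsibility above is often implemented as deterministic traffic splitting: hash each user's ID so the same user always hits the same model version, keeping the two cohorts' metrics comparable. A minimal sketch, where the version names and 10% split are illustrative assumptions:

```python
# Deterministic A/B routing sketch for serving two model versions.
# Version names and the treatment percentage are illustrative.
import hashlib

def model_version_for(user_id, treatment_pct=10):
    """Route ~treatment_pct% of users to the candidate model, the rest to stable."""
    # md5 gives a stable bucket across processes and deploys, unlike hash().
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 100
    return "asr-v2-candidate" if bucket < treatment_pct else "asr-v1-stable"

# The same user always gets the same version, so per-cohort metrics stay clean.
print(model_version_for("user-123") == model_version_for("user-123"))  # True
```

Production systems layer on gradual ramp-up, kill switches, and per-version metric collection, but the stable-bucket routing shown here is the core mechanism.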
Posted 2 weeks ago
2.0 - 4.0 years
4 - 6 Lacs
bengaluru
Work from Office
SQL Server (SSIS/SSRS) Support Engineer

Job Location: Hyderabad / Bangalore / Chennai / Kolkata / Noida / Gurgaon / Pune / Indore / Mumbai. 24x7 support with rotational shifts.

Job Details: As a Microsoft SSIS/SSRS Support Engineer, you will be responsible for managing data loads, handling incidents and changes, analyzing job failures, troubleshooting issues, implementing enhancements (including updating SSIS packages), and driving performance improvements.

Skill Set: Support experience is mandatory. Microsoft SSIS, SSRS, SQL. Good communication.

Essential Job Functions:
- Participate in data engineering tasks, including data processing and transformation.
- Assist in the development and maintenance of data pipelines and infrastructure.
- Collaborate with team members to support data collection and integration.
- Contribute to data quality and security efforts.
- Analyze data using data engineering tools and techniques.
- Collaborate with data engineers and analysts on data-related projects.
- Pursue opportunities to enhance data engineering skills and knowledge.
- Stay updated on data engineering trends and best practices.
Posted 2 weeks ago
1.0 - 2.0 years
1 - 2 Lacs
panipat
Work from Office
Candidates must have knowledge of Excel and Word. Candidates with share-market knowledge will be preferred. Typing speed should be high.
Posted 2 weeks ago
0.0 - 2.0 years
3 - 4 Lacs
hyderabad, chennai, bengaluru
Work from Office
Python Developer (Fresher)

Job Summary: We are looking for a Python Developer to assist in building scalable applications and automation tools.

Key Responsibilities:
- Write clean and efficient Python code.
- Learn frameworks like Django or Flask.
- Work on data processing and scripting tasks.

Requirements:
- Basic understanding of Python.
- Familiarity with libraries and frameworks.
- Interest in automation and data handling.
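The data processing and scripting tasks mentioned above are typically small transformations like the one below; the CSV columns and values are illustrative assumptions:

```python
# Minimal scripting example: parse CSV text and sum a numeric column.
# Column names and data are illustrative.
import csv
import io

raw = """city,orders
hyderabad,12
chennai,7
bengaluru,21
"""

total = 0
for row in csv.DictReader(io.StringIO(raw)):
    total += int(row["orders"])
print(total)  # 40
```

The same pattern (read, parse row by row, accumulate) scales from one-off scripts to the ingestion step of a larger automation tool; in real use the `io.StringIO` stand-in would be an open file.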
Posted 2 weeks ago
2.0 - 5.0 years
1 - 3 Lacs
navsari
Work from Office
The Back-End Admin will be responsible for maintaining all inventory records across Tally and e-commerce platforms, updating product listings, and monitoring stock availability. You will handle billing and invoicing and prepare daily, weekly, and monthly reports.
Posted 2 weeks ago
2.0 - 4.0 years
5 - 15 Lacs
bengaluru
Work from Office
Remote monitoring of hydrogen electrolyzers throughout India and across different locations. Safe startup and shutdown of the hydrogen electrolyzer based on customer requirements, and involvement in diagnostic assistance. Proactively monitor the operational data obtained through our server and applications. Respond quickly to the call center and ticketing systems in support of customer needs. Assess customer and regional monitoring protocols to support compliant data processing. Alarm server monitoring. Develop analytics to closely monitor the health and performance of the electrolyzer. Prepare the Daily Production Report (DPR) and send it across for budgeting. Prepare operational assessment reports that include machine operational performance, trip analysis, structural health, and sensor health. Responsible for condition monitoring of the system and actively taking decisions based on the current scenario. Safe isolation and restoration of equipment for issuance of PTW (Permit to Work) to the Maintenance Department, with due attention to safety and efficient operation of plant and personnel. Raising notifications in ERP for any observation made in the electrolyzer and making necessary modifications for system improvement. EDUCATION AND EXPERIENCE REQUIRED: 3-5 years of experience working on any one of the monitoring software platforms such as SCADA, PLC, HMI, or DCS is mandatory. Candidates with Remote Monitoring / Condition Monitoring or control room operations experience from industries such as Chemical, Oil & Gas, Process, Power, Battery, or Fuel Cells are preferred.
Skills:- Condition Monitoring, Control Center Operations, Control Room Management, Control Room Operations, Remote Monitoring Education: - Bachelor of Engineering / Bachelor of Technology (B.E./B.Tech) - Chemical Engineering, Bachelor of Engineering / Bachelor of Technology (B.E./B.Tech) - Electrical Engineering, Bachelor of Engineering / Bachelor of Technology (B.E./B.Tech) - Mechanical Engineering Ohmium is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
Posted 2 weeks ago
5.0 - 10.0 years
25 - 40 Lacs
hyderabad, gurugram, bengaluru
Hybrid
Salary: 25 to 40 LPA Exp: 5 to 10 years Location: Bangalore/Hyderabad Notice: Immediate only Key Skills: SQL, Advanced SQL, BI tools, ETL, etc. Roles and Responsibilities Extract, manipulate, and analyze large datasets from various sources such as Hive, SQL databases, and BI tools. Develop and maintain dashboards using Tableau to provide insights on banking performance, market trends, and customer behavior. Collaborate with cross-functional teams to identify key performance indicators (KPIs) and develop data visualizations to drive business decisions. Desired Candidate Profile 6-10 years of experience in Data Analytics or a related field with expertise in Banking Analytics, Business Intelligence, Campaign Analytics, Marketing Analytics, etc. Strong proficiency in tools like Tableau for data visualization; advanced SQL knowledge preferred. Experience working with big data technologies like the Hadoop ecosystem (Hive) and Spark; familiarity with the Python programming language required.
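As a hedged illustration of the KPI-style SQL work this listing describes, the sketch below runs a simple aggregation with Python's built-in sqlite3. The table, columns, and data are assumptions for demonstration, not a schema from the posting:

```python
import sqlite3

# In-memory database with an assumed transactions table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE txns (customer TEXT, amount REAL)")
conn.executemany("INSERT INTO txns VALUES (?, ?)",
                 [("a", 100.0), ("a", 50.0), ("b", 200.0)])

# A typical KPI aggregation: total spend per customer, highest first.
rows = conn.execute(
    "SELECT customer, SUM(amount) AS total "
    "FROM txns GROUP BY customer ORDER BY total DESC"
).fetchall()
print(rows)  # [('b', 200.0), ('a', 150.0)]
```

In practice the same GROUP BY / window-function patterns would run against Hive or a warehouse and feed a Tableau dashboard rather than a local database.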
Posted 2 weeks ago
4.0 - 9.0 years
20 - 35 Lacs
pune, gurugram, bengaluru
Hybrid
Salary: 20 to 35 LPA Exp: 5 to 8 years Location: Gurgaon (Hybrid) Notice: Immediate to 30 days Roles and Responsibilities Design, develop, test, deploy, and maintain large-scale data pipelines using GCP services such as BigQuery, Dataflow, Pub/Sub, Dataproc, and Cloud Storage. Collaborate with cross-functional teams to identify business requirements and design solutions that meet those needs. Develop complex SQL queries to extract insights from large datasets stored in Google Cloud SQL databases. Troubleshoot issues related to data processing workflows and provide timely resolutions. Desired Candidate Profile 5-9 years of experience in Data Engineering with expertise in GCP and BigQuery data engineering. Strong understanding of GCP Cloud Platform administration, including Compute Engine (Dataproc), Kubernetes Engine (K8s), Cloud Storage, Cloud SQL, etc. Experience working on big data analytics projects involving ETL processes using tools like Airflow or similar technologies.
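The pipeline work described above would in practice use Dataflow and the BigQuery client libraries; as a library-free sketch of one typical transform stage (parse, validate, deduplicate before load), here is a pure-Python stand-in. The message format and field names are illustrative assumptions:

```python
import json

def transform(records):
    """One pipeline stage: parse JSON events, drop malformed ones,
    and keep the latest event per id (a common dedup step before load)."""
    latest = {}
    for raw in records:
        try:
            event = json.loads(raw)
            latest[event["id"]] = event
        except (ValueError, KeyError):
            continue  # skip malformed input rather than failing the batch
    return list(latest.values())

batch = ['{"id": 1, "v": "old"}', 'not json', '{"id": 1, "v": "new"}']
print(transform(batch))  # [{'id': 1, 'v': 'new'}]
```

Tolerating bad records instead of crashing the batch is the design choice that matters here; at scale, a dead-letter sink would usually capture the skipped inputs.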
Posted 2 weeks ago
3.0 - 6.0 years
15 - 25 Lacs
pune, gurugram, bengaluru
Hybrid
Salary: 20 to 35 LPA Exp: 3 to 8 years Location: Pune/Bangalore/Gurgaon (Hybrid) Notice: Immediate only Key Skills: SQL, Advanced SQL, BI tools, etc. Roles and Responsibilities Extract, manipulate, and analyze large datasets from various sources such as Hive, SQL databases, and BI tools. Develop and maintain dashboards using Tableau to provide insights on banking performance, market trends, and customer behavior. Collaborate with cross-functional teams to identify key performance indicators (KPIs) and develop data visualizations to drive business decisions. Desired Candidate Profile 3-8 years of experience in Data Analytics or a related field with expertise in Banking Analytics, Business Intelligence, Campaign Analytics, Marketing Analytics, etc. Strong proficiency in tools like Tableau for data visualization; advanced SQL knowledge preferred. Experience working with big data technologies like the Hadoop ecosystem (Hive) and Spark; familiarity with the Python programming language required.
Posted 2 weeks ago
0.0 - 4.0 years
1 - 5 Lacs
bengaluru
Remote
Call Handling, Messaging: Answer inbound calls from potential job seekers, listen to their needs, and qualify them. Provide information on WhatsApp. Pass qualified leads to the recruitment team in a professional and timely manner. Work From Home.
Posted 2 weeks ago
2.0 - 5.0 years
4 - 6 Lacs
gurugram
Work from Office
Key Responsibilities Data Science & Analysis Design and implement statistical methodologies for experimental design, model calibration, and validation. Utilize frequentist and Bayesian techniques for uncertainty quantification and predictive analytics. Work with process-based data analysis tools (e.g., GDC, ARM, Bioanalytics, Python & SPSS) and ensure statistical rigor in outputs. Perform spatial and geostatistical analysis on large-scale agricultural and environmental datasets. Develop statistical workflows for Monitoring, Reporting, and Verification. Automate analytical pipelines using Python for improved reproducibility and efficiency. Collaborate with interdisciplinary teams of environmental scientists, agronomists, and data specialists. Trial Coordinator 1. Trial Coordination Coordinate and document institutional and in-house regulatory trials (Protocols, proposal letters, MOUs, acceptance letters, final reports). Liaise with trial partners (e.g., SAUs, research institutes) to ensure timely initiation and deficiency-free report submissions. Support CIB query resolution in coordination with Product Development (PD) teams. Assist in planning institutional trial programs for upcoming financial years. 2. Administrative & Operational Support Track and manage PD and BD budgets; maintain detailed payment records. Ensure institutional payment processes and vendor code creation are completed within defined timelines (e.g., 3 weeks). Follow up on GST upload statuses with institutions and stakeholders to close pending uploads within 30 business days. Provide quarterly summaries and gap analyses for GST uploads related to released payments. Coordinate drone services and sample availability for all trials. Assist in dashboard preparation (e.g., MCM Dashboard-12) and internal reporting as required.
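The uncertainty quantification mentioned above can be illustrated with a minimal frequentist sketch: a normal-approximation confidence interval for a sample mean, using only the Python standard library. The sample values and the 95% z-value are illustrative assumptions, not data from the role:

```python
import math
import statistics

def mean_ci(sample, z=1.96):
    """Approximate 95% confidence interval for the mean
    (normal approximation; illustrative only, not suited to tiny samples)."""
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(len(sample))
    return m - z * se, m + z * se

# Hypothetical field-trial measurements (e.g., yields in t/ha).
yields = [2.1, 2.4, 2.0, 2.6, 2.3, 2.2]
lo, hi = mean_ci(yields)
print(f"mean in [{lo:.2f}, {hi:.2f}] with ~95% confidence")
```

A Bayesian treatment would instead report a posterior credible interval, but the frequentist interval above is the simpler starting point for trial reporting.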
Posted 2 weeks ago
0.0 - 4.0 years
2 - 5 Lacs
hyderabad
Remote
Handle data entry for regulatory documents, MSDS, COAs, drug licenses, and audit records. Check for discrepancies in data and coordinate with relevant departments to resolve issues. Support the QA/QC team in maintaining GMP-compliant documentation
Posted 2 weeks ago
7.0 - 11.0 years
0 Lacs
pune, maharashtra
On-site
Join us for a role in "CCO Functions" at Barclays, where you'll spearhead the evolution of the digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionize our digital offerings, ensuring unparalleled customer experiences. As an AVP Controls Assurance at Barclays, you'll need to have at least 7 years of experience in Controls Assurance/Testing, along with knowledge of applying Data Analytics techniques. You should also possess knowledge of principal risks such as Data governance, data lineage, data quality, Records Management, People Risk, Supplier risk, Premises, etc. A basic minimum educational qualification of Graduate or equivalent is required for this role. Your responsibilities will include developing detailed test plans, identifying weaknesses in internal controls, and communicating key findings to relevant stakeholders. You will collaborate across the bank to maintain a robust and efficient control environment and provide advice on improvements to enhance the bank's internal controls framework. In addition to the essential requirements, some highly valued skills for this role may include having relevant professional certifications (CA, CIA, CS, MBA), knowledge of process re-engineering methodologies such as LEAN/DMAIC/Value Mapping, experience in the financial services industry, and proficiency in Project Management and Change Management. You may be assessed on key critical skills relevant for success in the role, including risk and controls, change and transformation, business acumen, strategic thinking, digital and technology, as well as job-specific technical skills. As an AVP Controls Assurance, you will play a crucial role in advising and influencing decision-making, contributing to policy development, and ensuring operational effectiveness. You will collaborate closely with other functions/business divisions and lead a team to deliver work that impacts the entire business function.
Moreover, you will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship, as well as the Barclays Mindset to Empower, Challenge, and Drive in your daily interactions and decision-making processes. Location: Pune Purpose of the role: To partner with the bank in providing independent assurance on control processes and advising on improvements to enhance the efficiency and effectiveness of the bank's internal controls framework. Key Responsibilities: - Collaboration across the bank to maintain a satisfactory, robust, and efficient control environment. - Development of detailed test plans and procedures to identify weaknesses in internal controls. - Communication of key findings and observations to relevant stakeholders. - Development of a knowledge center containing detailed documentation of control assessments and testing. Assistant Vice President Expectations: - Advise and influence decision-making and contribute to policy development. - Lead a team performing complex tasks and set objectives for employees. - Demonstrate leadership behaviors to create an environment for colleagues to thrive. - Engage in complex analysis of data to solve problems effectively. - Demonstrate the Barclays Values and Mindset in everyday actions. In summary, as an AVP Controls Assurance at Barclays, you will have the opportunity to drive innovation, enhance customer experiences, and play a key role in maintaining a robust control environment across the bank.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
haryana
On-site
You are a Quality Executive in a non-voice BPO company located in Gurgaon, Udyog Vihar Phase-4. With 3-4 years of experience in data processing, your main responsibility is to ensure that the quality of data and processes meets organizational standards and client expectations. This involves monitoring, evaluating, and analyzing data to identify improvement opportunities and enhance service delivery. You will collaborate with team members, analyze performance trends, and recommend corrective actions. Your key responsibilities include monitoring data for quality assurance, analyzing data for compliance, preparing quality reports, highlighting performance issues, and collaborating with the operations team for process refinement. You must suggest workflow improvements, participate in calibration sessions, ensure quality protocols are followed, and document audit results and corrective actions. To excel in this role, you need a Bachelor's degree, 3-4 years of experience in quality assurance in the BPO industry, strong analytical skills, excellent communication skills, proficiency in MS Office, and attention to detail. Familiarity with quality management tools like Six Sigma is a plus. You should be able to evaluate large volumes of data, have a customer-focused mindset, strong interpersonal skills, and be proactive and self-motivated. In return, you will receive a competitive salary, benefits package, a dynamic work environment, career growth opportunities, and a supportive team culture. This is a permanent position with benefits like Provident Fund, day shifts, and a performance bonus. The preferred language is English, and the work location is in person. If you are interested in this role, please contact the employer at +91 9871868333 for further discussions.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
patiala, punjab
On-site
RBH Solutions Pvt. Ltd. is in search of a proficient Cloud / System Architect to take charge of designing, executing, and overseeing scalable cloud and on-premise infrastructure solutions. Your role entails leveraging your expertise in real-time systems, coupled with a thorough grasp of enterprise deployment frameworks, microservices architecture, and cybersecurity practices. With a minimum of 3 years of hands-on experience in a similar capacity, you will be tasked with delving into AI/ML concepts and their seamless integration into cloud systems. Your familiarity with AI-based tools will play a pivotal role in augmenting coding, testing, automation, and deployment workflows. Moreover, a solid understanding of real-time systems, IoT, and energy management will be advantageous. Your responsibilities will revolve around crafting and overseeing infrastructure spanning virtual machines (VMs), Linux, Windows, and physical servers. By developing and executing enterprise-level cloud strategies and deployment frameworks, you will architect microservices-based solutions catering to real-time database applications. Furthermore, you will be entrusted with offering unified deployment solutions across on-premises, AWS, Azure, and Google Cloud. A critical aspect of your role will involve defining tools and strategies for data ingestion, storage, processing, and analysis. Your ability to optimize system architecture for enhanced performance, cost-efficiency, and scalability will be crucial. Ensuring compliance with project scope, preparing functional specifications, and monitoring cloud infrastructure performance are among the key duties. Security will be a key focus area, where your expertise will be instrumental in contributing to security requirements for RFPs/RFIs. This will encompass various facets such as network security, network access control, data loss prevention, and security information and event management. 
Upholding system security and data privacy across all infrastructure layers, conducting or supporting cybersecurity testing, and integrating secure-by-design principles throughout infrastructure planning are paramount. Ideal candidates should hold a Bachelor's or Master's degree in Computer Science, Information Technology, Electronics, or a related engineering field. Proficiency in Linux and Windows operating systems, strong communication skills for cross-functional collaboration, and programming knowledge in Python, C#, and Java are prerequisites. Additionally, a profound understanding of cloud security principles, the ability to automate and integrate IT system processes, and familiarity with PostgreSQL are desirable. This is a full-time position with a day shift schedule based in Patiala, Punjab.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
About Ideapoke: Ideapoke is a global, fast-growing start-up with offices in Bengaluru, Bay Area, Tokyo, and Shanghai. Our software, search, and insights power the innovation agenda of the largest Fortune 500 and Global 2000 companies worldwide. Our growth is fueled by our people and their unwavering commitment to the company-wide vision, strong work ethic, and an entrepreneurial do-it-all spirit. We believe that innovation amplifies success in every piece of work we do and by extension, amplifies the success of our clients. Ideapoke values constant learning, growth, and making a difference. Join us and be part of our story. Sr. Data Scientist: We are seeking applicants with a demonstrated research background in machine learning, a passion for independent research and technical problem-solving, and a proven ability to develop and implement ideas from research. The candidate will collaborate with researchers and engineers of multiple disciplines within Ideapoke, particularly with researchers in data collection and development teams, to create advanced data analytics solutions and work with massive amounts of data collected from various sources. Roles and Responsibilities: - Collaborate with product/business owners to transform business requirements into products/productized solutions or working prototypes of AI algorithms. - Evaluate and compare algorithm performance based on large, real-world datasets. - Extract insights and identify patterns from massive amounts of data using machine learning techniques and complex network analysis methods. - Design and implement ML algorithms and models through in-depth research and experimentation with neural network models, parameter optimization, and optimization algorithms. - Accelerate the distributed implementation of existing algorithms and models. - Conduct research to advance deep learning and provide technical solutions at scale for real-world challenges. 
- Establish scalable, efficient, automated processes for model development, validation, implementation, and large-scale data analysis. - Process large amounts of data in a distributed cloud environment. - Possess strong knowledge and experience in data extraction and processing. - Optimize pre-existing algorithms for accuracy and speed. - Demonstrate a flexible approach and the ability to develop new skills. - Be a self-starter capable of managing multiple research projects. - Exhibit team player characteristics and effective communication skills. Skills and Experiences Required: - Ph.D./Master's degree/B.Tech/B.E. from an accredited college/university in Computer Science, Statistics, Mathematics, Engineering, or related fields (strong mathematical/statistics background with the ability to understand algorithms and methods from a mathematical and intuitive viewpoint). - 3 to 4 years of academic or professional experience in Artificial Intelligence, Data Analytics, Machine Learning, Natural Language Processing/Text mining, or related fields. - Technical proficiency and hands-on expertise in Python, R, XML parsing, Big Data, NoSQL, and SQL.
Posted 2 weeks ago
5.0 - 11.0 years
0 Lacs
chennai, tamil nadu
On-site
Wipro Limited is a leading technology services and consulting company dedicated to creating innovative solutions for clients' complex digital transformation needs. With a global presence of over 230,000 employees and business partners across 65 countries, Wipro aims to help customers, colleagues, and communities thrive in an ever-evolving world. As a Generative AI Testing professional at Wipro, you will be responsible for collaborating with the AI testing team to plan and execute test plans, test cases, and test scenarios for AI systems. Your role will involve developing, executing, maintaining, and enhancing automated testing frameworks and scripts specifically tailored for AI component testing. You will also be expected to implement quality assurance standards to ensure the accuracy, reliability, and performance of AI solutions. In this position, you will design and implement benchmarking tests to evaluate AI system performance against industry standards and competitors. Additionally, expanding your knowledge in testing deep learning algorithms and various model families will be a key aspect of your role. Proficiency in Python and related packages for image processing, data processing, and automation testing is required. Familiarity with machine/deep learning frameworks such as TensorFlow, Keras, or PyTorch is essential. An understanding of the Software Development Life Cycle (SDLC) and Software Testing Life Cycle (STLC), with a focus on AI-specific testing phases and activities, is crucial for this position. You should also have experience in testing AI-driven applications across diverse platforms. The mandatory skill for this role is Gen AI Automation Testing with a required experience of 5-8 years. Join Wipro and be part of a modern, end-to-end digital transformation partner with bold ambitions. We are looking for individuals inspired by reinvention, eager to evolve their skills and careers.
At Wipro, you will have the opportunity to be part of a business powered by purpose and a culture that encourages you to design your own reinvention. Realize your ambitions with us at Wipro, where applications from people with disabilities are explicitly welcome.
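Automated testing of AI components, as described in the listing above, often relies on property-style assertions rather than exact-match checks, because model output is not deterministic. The sketch below illustrates the idea with a stand-in "model"; the stub function and the properties checked are assumptions for demonstration, not Wipro's framework:

```python
def stub_summarizer(text):
    """Stand-in for an AI component; a real test would call the model API."""
    return text.split(".")[0] + "."

def check_summary(model, text):
    """Property-style checks of the kind used for AI outputs: instead of
    asserting an exact string, verify length bounds and non-emptiness."""
    out = model(text)
    assert len(out) <= len(text), "summary must not be longer than input"
    assert out.strip(), "summary must be non-empty"
    return out

doc = "Spark jobs run nightly. Failures page the on-call engineer."
print(check_summary(stub_summarizer, doc))  # Spark jobs run nightly.
```

The same harness can wrap a real model behind the `model` callable, which is what makes the checks reusable across model families.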
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
ahmedabad, gujarat
On-site
As an AIML Engineer with 4+ years of experience located in Bopal, Ahmedabad (on-site), your main responsibilities will include designing, developing, and implementing AIML solutions to address complex business challenges. You will collaborate closely with cross-functional teams to grasp project requirements and seamlessly integrate AIML capabilities into existing systems. Your core focus will be on Algorithm Development, where you will create and deploy machine learning algorithms for tasks such as natural language processing and sentiment analysis. Your expertise will be crucial in fine-tuning and optimizing these algorithms to enhance performance and accuracy. In the realm of Data Processing and Analysis, you will be expected to preprocess and analyze extensive datasets to uncover valuable insights and patterns. Working alongside data engineers, you will ensure the quality and integrity of the data being used. Model Training and Evaluation will be another key area of your role. You will be responsible for training machine learning models using relevant frameworks and libraries, evaluating their performance, and continuously refining and enhancing them. Integration and Deployment will be a significant part of your duties, involving the seamless integration of AIML solutions into existing applications and systems. Collaboration with software developers is essential to deploy these models effectively in production environments. Your role will also entail thorough documentation practices, where you will document code, algorithms, and models for future reference and collaboration. Additionally, you will prepare technical documentation for end-users and stakeholders to ensure transparency and understanding. Staying updated on industry trends is crucial in this dynamic field.
You will be expected to keep yourself informed about the latest developments in AIML and related technologies, incorporating emerging trends and best practices to optimize existing solutions and stay ahead of the curve.
Posted 2 weeks ago