
3318 Databricks Jobs - Page 40

JobPe aggregates listings for easy application access, but you apply directly on the original job portal.

5.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Job Description

Role Purpose
The purpose of the role is to support process delivery by ensuring the daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

Oversee and support the process:
- Review daily transactions on performance parameters
- Review the performance dashboard and the team's scores
- Support the team in improving performance parameters by providing technical support and process guidance
- Record, track, and document all queries received, problem-solving steps taken, and total successful and unsuccessful resolutions
- Ensure standard processes and procedures are followed to resolve all client queries
- Resolve client queries as per the SLAs defined in the contract
- Develop understanding of the process/product for team members to facilitate better client interaction and troubleshooting
- Document and analyze call logs to spot the most frequent trends and prevent future problems
- Identify red flags and escalate serious client issues to the Team Leader in cases of untimely resolution
- Ensure all product information and disclosures are given to clients before and after call/email requests
- Avoid legal challenges by monitoring compliance with service agreements

Handle technical escalations:
- Manage and resolve technical roadblocks/escalations as per SLA and quality requirements; if unable to resolve an issue, escalate it to TA & SES in a timely manner
- Provide product support and resolution to clients by performing question diagnosis and guiding users through step-by-step solutions
- Troubleshoot all client queries in a user-friendly, courteous, and professional manner
- Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business
- Organize ideas and communicate oral messages appropriate to listeners and situations
- Follow up and make scheduled callbacks to customers to record feedback and ensure compliance with contract SLAs

Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client:
- Mentor and guide Production Specialists on improving technical knowledge
- Collate trainings to be conducted as triage to bridge skill gaps identified through interviews with Production Specialists
- Develop and conduct trainings (triages) within products for Production Specialists as per target, and inform the client about the triages being conducted
- Undertake product trainings to stay current with product features, changes, and updates; enroll in product-specific and other trainings per client requirements/recommendations
- Identify and document the most common problems and recommend appropriate resolutions to the team
- Update job knowledge by participating in self-learning opportunities and maintaining personal networks

Deliver (performance parameters and measures):
1. Process: number of cases resolved per day; compliance to process and quality standards; meeting process-level SLAs; Pulse score; customer feedback; NSAT/ESAT
2. Team Management: productivity, efficiency, absenteeism
3. Capability Development: triages completed, technical test performance

Mandatory Skills: Databricks - Data Engineering. Experience: 5-8 years.

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention: of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA: as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

As an AI Engineer specializing in Machine Learning and Natural Language Processing, you will lead the development and deployment of state-of-the-art AI solutions and products for a diverse client base.

What You'll Do
- Design, develop, and operationalize advanced NLP models such as summarization, question answering, intent recognition, dialog/conversational AI, semantic search, named entity recognition, knowledge discovery, document understanding, and text classification.
- Work with a wide range of datasets from various sources, including websites, wikis, enterprise applications, document stores, file systems, conversation platforms, social media, and databases.
- Employ leading-edge algorithms and models from TensorFlow, PyTorch, and Hugging Face, and engage with next-gen LLM frameworks like LangChain and Guardrails.
- Utilize modern MLOps practices to evaluate, manage, and deploy models efficiently and effectively in production environments.
- Develop and refine tools and processes to improve model performance and reproducibility across multiple customer engagements.
- Build and maintain robust, scalable solutions using cloud infrastructure such as AWS and Databricks to deploy LLM-powered systems.
- Create evaluation datasets, conduct rigorous model testing to ensure models meet high standards of accuracy and usability, and present findings and models effectively using platforms like Jupyter Notebooks.

Requirements
- Bachelor's degree in Computer Science, Information Technology, or a related field; a Master's degree or relevant certification is a plus.
- Strong experience with web frameworks like ReactJS, NextJS, or Vue.js.
- Strong programming skills in languages such as Python and Bash.
- Excellent analytical and problem-solving skills, and attention to detail.
- Exceptional communication skills, with the ability to explain complex technical concepts to non-technical stakeholders.

What We Offer
- An opportunity to be part of an agile, highly proficient, and experienced AI/ML team.
- An opportunity to work on challenging data science and machine learning problems with customers and see your work deployed in action.
- A fast-paced software development environment that uses the latest open-source tools across the development stack.

Benefits
We provide a competitive salary and benefits package, a vibrant work environment, and numerous opportunities for professional growth. You'll have the opportunity to work with a team of industry experts on exciting projects that transform businesses and create significant value. Join us to revolutionize the way companies leverage technology for digital transformation. OnebyZero is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
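Illustrative aside (not part of the listing): a minimal sketch of one NLP task named above, abstractive summarization, using the Hugging Face transformers pipeline API. The model choice and input text are assumptions; a production system would add batching, evaluation datasets, and MLOps around this.

```python
# Minimal summarization sketch with Hugging Face transformers.
# The model name is an assumption; any seq2seq summarization model works.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

text = (
    "Databricks provides a unified analytics platform built on Apache Spark. "
    "Teams use it to build data pipelines, train machine learning models, "
    "and serve them in production environments."
)

# max_length/min_length bound the generated summary in tokens.
result = summarizer(text, max_length=40, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```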

Posted 1 week ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Hyderabad

Work from Office

Source: Naukri

ABOUT THE ROLE

Role Description:
We are seeking an accomplished and visionary Data Scientist / GenAI Developer to join Amgen's Enterprise Data Management team. As part of the MDM team, you will be responsible for designing, developing, and deploying Generative AI and ML models to power data-driven decisions across business domains. This role is ideal for an AI practitioner who thrives in a collaborative environment and brings a strategic mindset to applying advanced AI techniques to solve real-world problems. To succeed in this role, the candidate must have strong AI/ML, Data Science, and GenAI experience along with MDM knowledge; candidates with only MDM experience are not eligible. The candidate must have AI/ML, data science, and GenAI experience with technologies such as PySpark/PyTorch, TensorFlow, LLMs, Autogen, Hugging Face, VectorDB, embeddings, and RAG, along with knowledge of MDM (Master Data Management).

Roles & Responsibilities:
- Develop enterprise-level GenAI applications using LLM frameworks such as LangChain, Autogen, and Hugging Face.
- Design and develop intelligent pipelines using PySpark, TensorFlow, and PyTorch within Databricks and AWS environments.
- Implement embedding models and manage VectorStores for retrieval-augmented generation (RAG) solutions (see the sketch after this posting).
- Integrate and leverage MDM platforms like Informatica and Reltio to supply high-quality structured data to ML systems.
- Utilize SQL and Python for data engineering, data wrangling, and pipeline automation.
- Build scalable APIs and services to serve GenAI models in production.
- Lead cross-functional collaboration with data scientists, engineers, and product teams to scope, design, and deploy AI-powered systems.
- Ensure model governance, version control, and auditability aligned with regulatory and compliance expectations.

Basic Qualifications and Experience:
- Master's degree with 4-6 years of experience in Business, Engineering, IT, or a related field; OR
- Bachelor's degree with 6-9 years of experience in Business, Engineering, IT, or a related field; OR
- Diploma with 10-12 years of experience in Business, Engineering, IT, or a related field.

Must-Have Skills:
- 6+ years of experience in AI/ML or Data Science roles, including designing and implementing GenAI solutions.
- Extensive hands-on experience with LLM frameworks and tools such as LangChain, Autogen, Hugging Face, OpenAI APIs, and embedding models.
- Strong programming background with Python and PySpark, and experience building scalable solutions using TensorFlow, PyTorch, and scikit-learn.
- Proven track record of building and deploying AI/ML applications in cloud environments such as AWS.
- Expertise in developing APIs, automation pipelines, and serving GenAI models using frameworks like Django, FastAPI, and Databricks.
- Solid experience integrating and managing MDM tools (Informatica/Reltio) and applying data governance best practices.
- Ability to guide the team on development activities and lead solution discussions.
- Core technical capabilities in the GenAI and Data Science space.

Good-to-Have Skills:
- Prior experience in data modeling, ETL development, and data profiling to support AI/ML workflows.
- Working knowledge of Life Sciences or Pharma industry standards and regulatory considerations.
- Proficiency in tools like JIRA and Confluence for Agile delivery and project collaboration.
- Familiarity with MongoDB, VectorStores, and modern architecture principles for scalable GenAI applications.

Professional Certifications:
- Any ETL certification (e.g. Informatica)
- Any data analysis certification (SQL)
- Any cloud certification (AWS or Azure)
- Data Science and ML certification

Soft Skills:
- Strong analytical abilities to assess and improve master data processes and solutions.
- Excellent verbal and written communication skills, with the ability to convey complex data concepts clearly to technical and non-technical stakeholders.
- Effective problem-solving skills to address data-related issues and implement scalable solutions.
- Ability to work effectively with global, virtual teams.

We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
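Illustrative aside (not part of the listing): a minimal sketch of the retrieval step in a RAG pipeline, embedding documents and ranking them against a query by cosine similarity. The model name and documents are assumptions; a real system would use a managed vector store rather than brute-force search.

```python
# Minimal RAG retrieval sketch: embed docs, embed a query, rank by similarity.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # model choice is an assumption

docs = [
    "Informatica and Reltio are MDM platforms for master data management.",
    "Retrieval-augmented generation grounds LLM answers in retrieved context.",
    "PySpark pipelines transform large datasets inside Databricks.",
]
doc_emb = model.encode(docs, normalize_embeddings=True)

query_emb = model.encode(
    ["How does RAG reduce hallucinations?"], normalize_embeddings=True
)

# With normalized embeddings, the dot product equals cosine similarity.
scores = doc_emb @ query_emb[0]
best = int(np.argmax(scores))
print(docs[best], float(scores[best]))
```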

Posted 1 week ago

Apply

3.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

About the Company
At Delaplex, we believe true organizational distinction comes from exceptional products and services. Founded in 2008 by a team of like-minded business enthusiasts, we have grown into a trusted name in technology consulting and supply chain solutions. Our reputation is built on trust, innovation, and the dedication of our people, who go the extra mile for our clients. Guided by our core values, we don't just deliver solutions, we create meaningful impact.

Primary Responsibilities
The ideal candidate will work with multiple small agile teams to deliver solutions in data and analytics technologies.
- Serve as Scrum Master for Agile teams delivering data and analytics solutions for a large manufacturing company.
- Work closely with Product Owners to align on business priorities, maintain a clear and actionable backlog, and ensure stakeholder needs are met.
- Facilitate core Agile ceremonies: Sprint Planning, Daily Standups, Backlog Refinement, Reviews, and Retrospectives.
- Guide the team through data-focused sprints, including work on ingestion, transformation, integration, and reporting.
- Track progress, remove blockers, and drive continuous improvement in team performance and delivery.
- Collaborate with data engineers, analysts, architects, and business teams to ensure high-quality, end-to-end solutions.
- Promote Agile best practices across platforms like SAP ECC, IBP, HANA, BOBJ, Databricks, and Tableau.
- Monitor and share Agile metrics (e.g., velocity, burn-down) to keep teams and stakeholders aligned.
- Support team capacity planning, identify bottlenecks early, and help the team stay focused and accountable.
- Foster a culture of collaboration, adaptability, and frequent customer feedback to ensure business value is delivered in every sprint.
- Orient the team to focus on the objects to be built more than the tasks required to build them; the point is to build things, not complete tasks.
- Guide the team to continuously break efforts down into smaller components. Smaller work pieces result in better flow: having 8 stories of half a day each is better than having 1 story of 4 days.
- Guide the team to always provide clarity on stories by using detailed descriptions and explicit acceptance criteria.
- Bring the team's focus in daily standup meetings to completing things instead of working on things.

Must Have:
- 3-5 years of experience as a Scrum Master with a focus on SAP, HANA, and Data & Analytics.
- Solid understanding of standard scrum practices and ceremonies.
- Solid understanding of the core principles of being agile: being truly agile is about more than walking through the ceremonies.
- Ability to grasp the nuances of the team's dynamics and nudge the team toward better interactions.
- Excellent organizational, interpersonal, time-management, analytical, and critical-thinking skills.
- Ability to track team velocity and help the team set sprint goals that are both ambitious and doable.
- Excellent written and verbal communication skills, including clear articulation of business impact and technical constraints tailored to the audience.
- Flexibility to work occasional hours outside normal business hours, according to business needs.
- Ability to work with people across the globe.

Skills: analytical skills, scrum master, critical thinking, ceremonies, agile, teams, scrum, data and analytics, HANA, analytics, agile methodologies, interpersonal skills, organizational skills, time management, SAP, focus, data, excellent communication

Posted 1 week ago

Apply

15.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Data Project/Program/Delivery Manager

Technical Skills:
- Primary: SQL, data warehousing, Azure tech stack: Azure Data Factory (ADF), Databricks, Synapse
- Secondary: knowledge of a BI reporting tool like Power BI or Tableau
- Good to have (an added advantage when assessing the profile): expertise in the Insurance domain

Experience/Skills:
- Minimum 15 years of work experience in the successful end-to-end delivery of complex data-related projects.
- Experience in Agile or DataOps delivery, quality practices, techniques, and tools at all layers of data engineering.
- Tech-savvy, with a good understanding of recent technologies, including Azure cloud APIs, inclusion of unstructured data, and business intelligence tools.
- Familiarity with JIRA and other prioritization tools.
- Knowledge of and experience with project management methodologies (Agile/Waterfall) for intricate, multifaceted projects.
- Excellent communication and coordination skills.
- Comfortable with changing and flexible requirements from the business owner.
- Customer-oriented attitude and a high degree of self-motivation.
- Experience managing third-party relationships in the successful achievement of customer deliveries.
- Demonstrated track record of delivering high-quality projects and programs for medium to large accounts.
- Ability to communicate clearly at all levels and present to senior leadership.
- Ability to lead, motivate, and direct medium-to-large engineering delivery teams.
- Ability to help define delivery management core processes and improvement opportunities.
- Demonstrated attentiveness to quality and productivity as outcomes.
- Advanced analytical, problem-solving, negotiation, and organizational skills.
- Ability to manage significant delivery budgets and minimize program variances.
- Strong ability to lead teams across multiple shores.
- Strong ETL skills and working experience with SSIS and related functions.
- Knowledge of data warehouse and data lake frameworks.

Posted 1 week ago

Apply

3.0 - 5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

About the Company
At Delaplex, we believe true organizational distinction comes from exceptional products and services. Founded in 2008 by a team of like-minded business enthusiasts, we have grown into a trusted name in technology consulting and supply chain solutions. Our reputation is built on trust, innovation, and the dedication of our people, who go the extra mile for our clients. Guided by our core values, we don't just deliver solutions, we create meaningful impact.

Primary Responsibilities
The ideal candidate will work with multiple small agile teams to deliver solutions in data and analytics technologies.
- Serve as Scrum Master for Agile teams delivering data and analytics solutions for a large manufacturing company.
- Work closely with Product Owners to align on business priorities, maintain a clear and actionable backlog, and ensure stakeholder needs are met.
- Facilitate core Agile ceremonies: Sprint Planning, Daily Standups, Backlog Refinement, Reviews, and Retrospectives.
- Guide the team through data-focused sprints, including work on ingestion, transformation, integration, and reporting.
- Track progress, remove blockers, and drive continuous improvement in team performance and delivery.
- Collaborate with data engineers, analysts, architects, and business teams to ensure high-quality, end-to-end solutions.
- Promote Agile best practices across platforms like SAP ECC, IBP, HANA, BOBJ, Databricks, and Tableau.
- Monitor and share Agile metrics (e.g., velocity, burn-down) to keep teams and stakeholders aligned.
- Support team capacity planning, identify bottlenecks early, and help the team stay focused and accountable.
- Foster a culture of collaboration, adaptability, and frequent customer feedback to ensure business value is delivered in every sprint.
- Orient the team to focus on the objects to be built more than the tasks required to build them; the point is to build things, not complete tasks.
- Guide the team to continuously break efforts down into smaller components. Smaller work pieces result in better flow: having 8 stories of half a day each is better than having 1 story of 4 days.
- Guide the team to always provide clarity on stories by using detailed descriptions and explicit acceptance criteria.
- Bring the team's focus in daily standup meetings to completing things instead of working on things.

Must Have:
- 3-5 years of experience as a Scrum Master with a focus on SAP, HANA, and Data & Analytics.
- Solid understanding of standard scrum practices and ceremonies.
- Solid understanding of the core principles of being agile: being truly agile is about more than walking through the ceremonies.
- Ability to grasp the nuances of the team's dynamics and nudge the team toward better interactions.
- Excellent organizational, interpersonal, time-management, analytical, and critical-thinking skills.
- Ability to track team velocity and help the team set sprint goals that are both ambitious and doable.
- Excellent written and verbal communication skills, including clear articulation of business impact and technical constraints tailored to the audience.
- Flexibility to work occasional hours outside normal business hours, according to business needs.
- Ability to work with people across the globe.

Skills: analytical skills, scrum master, critical thinking, ceremonies, agile, teams, scrum, data and analytics, HANA, analytics, agile methodologies, interpersonal skills, organizational skills, time management, SAP, focus, data, excellent communication

Posted 1 week ago

Apply

10.0 years

0 Lacs

India

Remote

Source: LinkedIn

🚀 We’re Hiring: Senior Data Engineer (Remote – India | Full-Time or Contract)

We are helping our client hire a Senior Data Engineer with over 10 years of experience in modern data platforms. This is a remote role open across India, available on both a full-time and a contract basis.

💼 Position: Senior Data Engineer
🌍 Location: Remote (India)
📅 Type: Full-Time / Contract
📊 Experience: 10+ Years

🔧 Must-Have Skills:
- Data Engineering, Data Warehousing, ETL
- Azure Databricks & Azure Data Factory (ADF)
- PySpark, SparkSQL
- Python, SQL

👀 What We’re Looking For:
- A strong background in building and managing data pipelines
- Hands-on experience with cloud platforms, especially Azure
- Ability to work independently and collaborate in distributed teams

📩 How to Apply:
Please send your resume to [your email] with the subject line: "Senior Data Engineer – Remote India"

⚠️ Along with your resume, kindly include the following details:
- Full Name
- Mobile Number
- Total Experience
- Relevant Experience
- Current CTC
- Expected CTC
- Notice Period
- Current Location
- Are you fine with contract, full-time, or both?
- Willing to work IST/US overlapping hours: Yes/No
- Do you have a PF account? (Yes/No)

🔔 Follow our company page to stay updated on future job openings!

#DataEngineer #AzureDatabricks #ADF #PySpark #SQL #RemoteJobsIndia #HiringNow #Strive4X #ContractJobs #FullTimeJobs #IndiaJobs
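Illustrative aside (not part of the listing): a minimal PySpark sketch of the must-have skills above, reading raw CSV, applying a transformation, and writing Parquet. Paths, column names, and the aggregation are assumptions.

```python
# Minimal PySpark ETL sketch: CSV in, daily aggregate out as Parquet.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Source path and schema inference are assumptions for illustration.
orders = (spark.read
          .option("header", True)
          .option("inferSchema", True)
          .csv("/mnt/raw/orders.csv"))

# Derive a daily revenue aggregate, excluding cancelled orders.
daily = (orders
         .filter(F.col("status") != "cancelled")
         .groupBy(F.to_date("order_ts").alias("order_date"))
         .agg(F.sum("amount").alias("revenue")))

daily.write.mode("overwrite").parquet("/mnt/curated/daily_revenue")
```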

Posted 1 week ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Job Description
PayPay's rapid growth necessitates the expansion of its product teams and underscores the critical need for a resilient data engineering platform. This platform is vital to support our increasing business demands. The Data Pipeline team is tasked with creating, deploying, and managing this platform, utilizing leading technologies like Databricks, Delta Lake, Spark, PySpark, Scala, and the AWS suite. We are actively seeking skilled Data Engineers to join our team and contribute to scaling our platform across the organization.

Main Responsibilities
- Create and manage robust data ingestion pipelines leveraging Databricks, Airflow, Kafka, and Terraform (see the orchestration sketch after this posting).
- Ensure high performance, reliability, and efficiency by optimizing large-scale data pipelines.
- Develop data processing workflows using Databricks, Delta Lake, and Spark technologies.
- Maintain and improve the Data Lakehouse, utilizing Unity Catalog for efficient data management and discovery.
- Construct automation, frameworks, and enhanced tooling to streamline data engineering workflows.
- Collaborate across teams to facilitate smooth data flow and integration.
- Enforce best practices in observability, data governance, security, and regulatory compliance.

Qualifications
- Minimum 5 years as a Data Engineer or in a similar role.
- Hands-on experience with Databricks, Delta Lake, Spark, and Scala.
- Proven ability to design, build, and operate data lakes or data warehouses.
- Proficiency with data orchestration tools (Airflow, Dagster, Prefect).
- Familiarity with change data capture tools (Canal, Debezium, Maxwell).
- Strong command of at least one primary language (Scala, Python, etc.) and SQL.
- Experience with data catalogs and metadata management (Unity Catalog, Lake Formation).
- Experience with Infrastructure as Code (IaC) using Terraform.
- Excellent problem-solving and debugging abilities for complex data challenges.
- Strong communication and collaboration skills.
- Capability to make informed decisions, learn quickly, and consider complex technical contexts.
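Illustrative aside (not part of the listing): a minimal Airflow sketch of an ingest-then-transform pipeline like the one described. DAG id, schedule, and task bodies are assumptions; a real deployment would invoke Databricks jobs (e.g., via the Databricks provider) rather than plain Python callables.

```python
# Minimal Airflow DAG sketch: two ordered tasks on a daily schedule.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():
    print("pull events from Kafka / land files into the lake")

def transform():
    print("run a Spark job to build Delta tables")

with DAG(
    dag_id="ingestion_pipeline",           # name is an assumption
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",            # newer Airflow spells this `schedule=`
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    ingest_task >> transform_task  # transform runs only after ingest succeeds
```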

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Job Title: Java Developer
Job Type: Contract (6 Months)
Location: Pune

Role Overview
We are seeking a skilled Java Developer for a 6-month contract role based in Pune. The ideal candidate will have strong hands-on experience in Java-based enterprise application development and a solid understanding of cloud technologies.

Key Responsibilities
- Analyze customer/internal requirements and translate them into software design documents; present RFCs to the architecture team
- Write clean, high-quality, maintainable code based on approved designs
- Conduct thorough unit and system-level testing to ensure software reliability
- Collaborate with cross-functional teams to analyze, design, and deliver applications
- Ensure optimal performance, scalability, and responsiveness of applications
- Take technical ownership of assigned features
- Provide mentorship and support to team members in resolving technical and functional issues
- Review and approve peer code through pull requests

Must-Have Skills
- Frameworks/Technologies: Spring Boot, Spring AOP, Spring MVC, Hibernate, Play, REST APIs, Microservices
- Programming Languages: Core Java, Java 8 (streams, lambdas, fluent-style programming), J2EE
- Database: Strong SQL skills with the ability to write complex queries
- DevOps: Hands-on experience with CI/CD pipelines
- Cloud: Solid understanding of AWS services such as S3, Lambda, SNS, SQS, IAM roles, Kinesis, EMR, Databricks
- Coding Practices: Scalable and maintainable code development; experience in cloud-native application development

Nice-to-Have Skills
- Additional Languages/Frameworks: Golang, React, OAuth, SCIM
- Databases: NoSQL, Redshift
- AWS Tools: KMS, CloudWatch, caching, notification services, queues

Candidate Requirements
- Proven experience in core application development
- Strong communication and interpersonal skills
- Proactive attitude with a willingness to learn new technologies and products
- Collaborative team player with a growth mindset

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

We are looking for an enthusiastic Machine Learning Engineer to join our growing team. The hire will work in collaboration with other data scientists and engineers across the organization to develop production-quality models for a variety of problems across Razorpay. Some possible problems include: making recommendations to merchants from Razorpay's suite of products, cost optimization of transactions for merchants, automatic address disambiguation/correction to enable tracking customer purchases using advanced natural language processing techniques, computer vision techniques for auto-verifications, running large-scale bandit experiments to optimize Razorpay's merchant-facing web pages at scale, and many more. In addition, we expect the MLE to be adept at productionizing ML models using state-of-the-art systems.

As part of the DS team at Razorpay, you'll work with some of the smartest engineers, architects, data scientists, and product leaders in the industry and have the opportunity to solve complex and critical problems for Razorpay. As a Senior MLE, you will also have the opportunity to partner with and be mentored by senior engineers across the organization and lay the foundation for a world-class DS team here at Razorpay. Come with the right attitude, and fun and growth are guaranteed!

Required Qualifications
- 5+ years of experience doing ML in a production environment and productionizing ML models at scale
- Bachelor's (required) or Master's degree in a quantitative field such as computer science, operations research, statistics, mathematics, or physics
- Familiarity with basic machine learning techniques: regression, classification, clustering, and model metrics and performance (AUC, ROC, precision, recall, and their various flavors; see the sketch after this posting)
- Basic knowledge of advanced machine learning techniques: regression, clustering, recommender systems, ranking systems, and neural networks
- Expertise in coding in Python, good knowledge of at least one language from C, C++, and Java, and at least one scripting language (Perl, shell commands)
- Experience with big data tools like Spark, and experience working with Databricks/DataRobot
- Experience with AWS's suite of tools for production-quality ML work, or alternatively familiarity with Microsoft Azure/GCP
- Experience deploying complex ML algorithms to production in collaboration with engineers using Flask, MLflow, Seldon, etc.

Good to Have
- Excellent communication skills and the ability to keep stakeholders informed of progress and blockers
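Illustrative aside (not part of the listing): a minimal sketch of the model metrics named above (AUC/ROC, precision, recall) computed with scikit-learn on toy labels; in practice these come from a held-out evaluation set.

```python
# Minimal classification-metrics sketch with scikit-learn; data is illustrative.
from sklearn.metrics import precision_score, recall_score, roc_auc_score

y_true = [0, 0, 1, 1, 1, 0, 1, 0]                      # ground-truth labels
y_score = [0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.9, 0.6]    # model probabilities
y_pred = [1 if s >= 0.5 else 0 for s in y_score]       # thresholded labels

print("AUC:      ", roc_auc_score(y_true, y_score))    # uses raw scores
print("Precision:", precision_score(y_true, y_pred))   # uses hard labels
print("Recall:   ", recall_score(y_true, y_pred))
```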

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

The Role:
We are looking for an enthusiastic Senior Data Scientist to join our growing team. The hire will be responsible for working in collaboration with other data scientists and engineers across the organization to develop production-quality models for a variety of problems across Razorpay. Some possible problems include: making recommendations to merchants from Razorpay's suite of products, cost optimization of transactions for merchants, and automatic address disambiguation/correction to enable tracking customer purchases using advanced natural language processing techniques. As part of the DS team at Razorpay, you'll work with some of the smartest engineers, architects, and data scientists in the industry and have the opportunity to solve complex and critical problems for Razorpay.

Responsibilities:
- Apply advanced data science, mathematics, and machine learning techniques to solve complex business problems.
- Collaborate with cross-functional teams to design and deploy data science solutions.
- Analyze large volumes of data to derive actionable insights.
- Present findings and recommendations to stakeholders, effectively communicating complex concepts.
- Identify key metrics, conduct exploratory data analysis, and create executive-level dashboards.
- Manage multiple projects in a fast-paced environment, ensuring high-quality deliverables.
- Train and maintain machine learning models, utilizing deep learning frameworks and big data tools.
- Continuously improve solutions, evaluating their effectiveness and optimizing performance.
- Deploy data-driven solutions and effectively communicate results to stakeholders.

Mandatory Qualifications:
- 5+ years of experience working with machine learning in a production environment.
- Bachelor's or Master's degree in a quantitative field (e.g., Computer Science, Operations Research, Statistics, Mathematics, Physics).
- Strong knowledge of fundamental machine learning techniques, such as regression, classification, clustering, and model evaluation metrics.
- Proficiency in Python and familiarity with languages like C, C++, or Java. Experience with scripting languages like Perl and command-line Unix is a plus.
- Experience with deep learning frameworks (TensorFlow, Keras, PyTorch) and big data tools like Spark, and 2-3 years of experience building production-quality machine learning code on platforms like Databricks.
- Experience with AWS/GCP/Microsoft Azure for building production-quality ML models and systems.
- Ability to conduct end-to-end ML experimentation, including model experimentation, success reporting, A/B testing, and testing metrics.
- Excellent communication skills and the ability to keep stakeholders informed of progress and potential blockers.

Posted 1 week ago

Apply

2.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

Profile: Sr. DW BI Developer
Location: Sector 64, Noida (Work from Office)

Position Overview:
Working with the Finance Systems Manager, the role will ensure that the ERP system is available and fit for purpose. The ERP Systems Developer will develop the ERP system, provide comprehensive day-to-day support and training, and evolve the current ERP system for the future.

Key Responsibilities:
- As a Sr. DW BI Developer, participate in the design, development, customization, and maintenance of software applications.
- Analyze the different applications/products, then design and implement the DW using best practices.
- Bring rich data governance experience: data security, data quality, and provenance/lineage.
- Maintain a close working relationship with the other application stakeholders.
- Develop secured and high-performance web applications.
- Apply knowledge of software development life-cycle methodologies (e.g., Iterative, Waterfall, Agile).
- Design and architect future releases of the platform.
- Participate in troubleshooting application issues.
- Work jointly with other teams and partners handling different aspects of platform creation.
- Track advancements in software development technologies and apply them judiciously in the solution roadmap.
- Ensure all quality controls and processes are adhered to.
- Plan the major and minor releases of the solution and ensure robust configuration management.
- Work closely with the Engineering Manager on different aspects of product lifecycle management.
- Work independently in a fast-paced environment requiring multitasking and efficient time management.

Required Skills and Qualifications:
- End-to-end lifecycle experience with data warehousing, data lakes, and reporting.
- Experience maintaining/managing data warehouses, including responsibility for the design and development of large, scaled-out, real-time, high-performing data lake/data warehouse systems (including big data and cloud).
- Strong SQL and analytical skills; proficiency in writing and debugging complex SQL.
- Experience in Power BI, Tableau, QlikView, Qlik Sense, etc.
- Experience with Microsoft Azure services, including developing and supporting ADF pipelines.
- Experience in Azure SQL Server/Databricks/Azure Analysis Services, including developing tabular models.
- Experience working with APIs.
- Minimum 2 years of experience in a similar role; 2-6 years of total experience building DW/BI systems.
- Experience with data warehousing and data modelling, with experience in ETL and large-scale datasets.
- Prior experience working with global clients.
- Hands-on experience with Kafka, Flink, Spark, Snowflake, Airflow, NiFi, Oozie, Pig, Hive, Impala, and Sqoop.
- Storage technologies such as HDFS, object storage (S3, etc.), RDBMS, MPP, and NoSQL databases.
- Experience with distributed data management, including data failover, high availability, and scalability across relational, NoSQL, and big data stores, covering data analysis, processing, and transformation.
- Experience with end-to-end project implementation in the cloud (Azure/AWS/GCP) as a DW BI Developer.
- Understanding of industry trends and products in DataOps, continuous intelligence, augmented analytics, and AI/ML.
- Prior experience working in clouds like Azure, AWS, and GCP.

To view our Privacy Policy, please click the link below or copy and paste the URL into your browser: https://gedu.global/wp-content/uploads/2023/09/GEDU-Privacy-Policy-22092023-V2.0-1.pdf

Posted 1 week ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Role: Data Analyst - Product Analytics

Job Description
- 4+ years of experience in data analytics, with at least 2 years in product analytics.
- Proficiency in SQL and Python for data analysis; strong understanding of experimentation methods and causal inference.
- Experience working with Azure Data Services and Databricks (or equivalent cloud-based platforms).
- Expertise in building compelling, user-friendly dashboards in Power BI.
- Strong communication skills and ability to influence cross-functional stakeholders.
- Passionate about solving customer problems and improving products through data.
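Illustrative aside (not part of the listing): a minimal sketch of one experimentation method this role calls for, a two-proportion z-test on A/B conversion counts. All numbers are illustrative.

```python
# Minimal A/B-test sketch: compare conversion rates of control vs. treatment.
from statsmodels.stats.proportion import proportions_ztest

conversions = [420, 510]     # control, treatment (illustrative counts)
visitors = [10000, 10000]

z_stat, p_value = proportions_ztest(conversions, visitors)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests the rates differ; sound causal inference also
# depends on proper randomization, sample sizing, and guardrail metrics.
```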

Posted 1 week ago

Apply

0 years

0 Lacs

India

On-site

Source: LinkedIn

Job Title: Senior Software Engineer
Job Type: Full-time, Contractor

About Us:
Our mission at micro1 is to match the most talented people in the world with their dream jobs. If you are looking to be at the forefront of AI innovation and work with some of the fastest-growing companies in Silicon Valley, we invite you to apply for a role. By joining the micro1 community, your resume will become visible to top industry leaders, unlocking access to the best career opportunities on the market.

Job Summary:
We are seeking a highly skilled Senior Software Engineer to join one of our top customers, a team committed to designing and implementing high-performance microservices. The ideal candidate will have extensive experience with Python, FastAPI, task queues, WebSockets, and Kubernetes to build scalable solutions for our platforms. This is an exciting opportunity for those who thrive in challenging environments and have a passion for technology and innovation.

Key Responsibilities:
- Design and develop backend services using Python, with an emphasis on FastAPI for high-performance applications.
- Architect and orchestrate microservices to handle high-concurrency I/O requests efficiently.
- Deploy and manage applications on AWS, ensuring robust and scalable solutions are delivered.
- Implement and maintain messaging queues using Celery, RabbitMQ, or AWS SQS.
- Utilize WebSockets and asynchronous programming to enhance system responsiveness and performance.
- Collaborate with cross-functional teams to ensure seamless integration of solutions.
- Continuously improve system reliability, scalability, and performance through innovative design and testing.

Required Skills and Qualifications:
- Proven experience in production deployments with user bases exceeding 10k.
- Expertise in Python and FastAPI, with strong knowledge of microservices architecture.
- Proficiency in working with queues and asynchronous programming.
- Hands-on experience with databases such as Postgres, MongoDB, or Databricks.
- Comprehensive knowledge of Kubernetes for running scalable microservices.
- Exceptional written and verbal communication skills.
- Consistent work history without overlapping roles or career gaps.

Preferred Qualifications:
- Experience with GoLang for microservice development.
- Familiarity with data lake technologies such as Iceberg.
- Understanding of deploying APIs in Kubernetes environments.
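Illustrative aside (not part of the listing): a minimal FastAPI sketch of the stack described above, one async HTTP route and one WebSocket echo route. Endpoint names are assumptions; production code would add auth, task queues, and Kubernetes manifests around this.

```python
# Minimal FastAPI sketch: async HTTP endpoint plus a WebSocket echo endpoint.
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()

@app.get("/health")
async def health() -> dict:
    # Simple liveness probe for Kubernetes-style deployments.
    return {"status": "ok"}

@app.websocket("/ws/echo")
async def echo(websocket: WebSocket) -> None:
    await websocket.accept()
    try:
        while True:
            message = await websocket.receive_text()
            await websocket.send_text(f"echo: {message}")
    except WebSocketDisconnect:
        pass  # client closed the connection

# Run with: uvicorn main:app --reload   (assuming this file is main.py)
```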

Posted 1 week ago

Apply

10.0 years

0 Lacs

India

On-site

Source: LinkedIn

What You’ll Be Doing
- Design and build parts of our data pipeline architecture for the extraction, transformation, and loading of data from a wide variety of data sources using the latest big data technologies.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
- Work with machine learning, data, and analytics experts to drive innovation, accuracy, and greater functionality in our data systems.

Qualifications
- Bachelor's degree in Engineering, Computer Science, or a relevant field.
- 10+ years of relevant and recent experience in a Data Engineer role.
- 5+ years of recent experience with Apache Spark and a solid understanding of the fundamentals.
- Deep understanding of big data concepts and distributed systems.
- Strong coding skills with Scala, Python, Java, and/or other languages, and the ability to switch between them with ease.
- Advanced working SQL knowledge and experience with a variety of relational databases such as Postgres and/or MySQL.
- Cloud experience with Databricks.
- Experience working with data stored in many formats, including Delta tables, Parquet, CSV, and JSON.

Skills: java, data infrastructure, sql, parquet, big data, databricks, relational databases, scala, postgres, data engineering, apache spark, delta tables, python, big data concepts, distributed systems, json, mysql, csv, postgresql

Posted 1 week ago

Apply

0.0 - 10.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Source: Indeed

Job Description:

Overall Purpose
This position will interact on a consistent basis with other developers, architects, data product owners, and source systems. It requires multifaceted candidates who have experience in data analysis and visualization, good hands-on experience with BI tools and relational databases, and experience in data warehouse architecture (traditional and cloud).

Key Roles and Responsibilities
- Develop, understand, and enhance code in traditional data warehouse environments, data lakes, and cloud environments like Snowflake, Azure, and Databricks.
- Build new end-to-end business intelligence solutions: data extraction, ETL processes applied to derive useful business insights, and dashboards that best represent the data.
- Write complex SQL queries used to transform data, using Python/Unix shell scripting.
- Understand business requirements and create visual reports and dashboards using Power BI or Tableau.
- Upskill to different technologies; understand the existing products and programs in place.
- Work with other development and operations teams; be flexible with shifts and occasional weekend support.

Key Competencies
- Full life-cycle experience on enterprise software development projects.
- Experience in relational databases/data marts/data warehouses and complex SQL programming.
- Extensive experience in ETL, shell or Python scripting, data modelling, analysis, and preparation.
- Experience with Unix/Linux systems, file systems, and shell scripting.
- Good to have: knowledge of cloud platforms like AWS, Azure, Snowflake, etc.
- Good to have: experience in BI reporting tools (Power BI or Tableau).
- Good problem-solving and analytical skills used to resolve technical problems.
- A good understanding of business requirements and IT strategies.
- Ability to work independently while being a team player; able to drive business decisions and take ownership of the work.
- Experience in presentation design, development, and delivery, with good communication skills to present analytical results and recommendations for action-oriented, data-driven decisions and the associated operational and financial impacts.

Required/Desired Skills
- Cloud platforms: Azure, Snowflake, Databricks, Delta Lake (required, 3-4 years)
- RDBMS and data warehousing (required, 7-10 years)
- SQL programming and ETL (required, 7-10 years)
- Unix/Linux shell scripting (required, 2-3 years)
- Power BI/Tableau (desired, 3 years)
- Python or any other programming language (desired, 4 years)

Education & Qualifications
- University degree in Computer Science and/or Analytics
- Minimum experience required: 7-10 years in relational database design and development, and ETL development

Weekly Hours: 40
Time Type: Regular
Location: Bangalore, Karnataka, India

It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state, or local law. In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities. AT&T is a fair chance employer and does not initiate a background check until an offer is made.

Posted 1 week ago

Apply


0.0 - 5.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

Source: Indeed

Ahmedabad, Gujarat | Full Time

Job Overview:
We are looking for a skilled and experienced Data Engineer to join our team. The ideal candidate will have a strong background in Azure Data Factory, Databricks, PySpark, Python, Azure SQL, and other Azure cloud services, and will be responsible for building and managing scalable data pipelines, data lakes, and data warehouses. Experience with Azure Synapse Analytics, Microsoft Fabric, or Power BI will be considered a strong advantage.

Key Responsibilities:
- Design, develop, and manage robust and scalable ETL/ELT pipelines using Azure Data Factory and Databricks
- Work with PySpark and Python to transform and process large datasets
- Build and maintain data lakes and data warehouses on Azure Cloud
- Collaborate with data architects, analysts, and stakeholders to gather and translate requirements into technical solutions
- Ensure data quality, consistency, and integrity across systems
- Optimize the performance and cost of data pipelines and cloud infrastructure
- Implement best practices for security, governance, and monitoring of data pipelines
- Maintain and document data workflows and architecture

Required Skills & Qualifications:
- 3-5 years of experience in Data Engineering
- Strong hands-on experience with Azure Data Factory (ADF), Azure Databricks, Azure SQL, PySpark and Python, and Azure Storage (Blob, Data Lake Gen2)
- Hands-on experience with data warehouse/lakehouse/data lake architecture
- Familiarity with Delta Lake, MLflow, and Unity Catalog is a plus
- Good understanding of SQL and performance tuning
- Knowledge of CI/CD in Azure for data pipelines
- Excellent problem-solving skills and the ability to work independently

Preferred Skills:
- Experience with Azure Synapse Analytics
- Familiarity with Microsoft Fabric
- Working knowledge of Power BI for data visualization and dashboarding
- Exposure to DevOps and infrastructure as code (IaC) in Azure
- Understanding of data governance and security best practices
- Databricks certification (e.g., Databricks Certified Data Engineer Associate/Professional)
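Illustrative aside (not part of the listing): a minimal sketch of a Databricks step in an ADF-fed lakehouse, reading landed JSON and writing a Delta table. Paths and table names are assumptions, and it assumes a Databricks runtime (or delta-spark configured locally) where the Delta format and target schema are available.

```python
# Minimal lakehouse-load sketch: landed JSON in, deduplicated Delta table out.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lakehouse_load").getOrCreate()

# Assumed landing path, e.g. populated by an ADF copy activity.
raw = spark.read.json("/mnt/landing/customers/")

(raw.dropDuplicates(["customer_id"])     # column name is an assumption
    .write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("curated.customers"))   # assumes the `curated` schema exists
```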

Posted 1 week ago

Apply

0.0 years

0 Lacs

Hyderabad, Telangana

On-site

Source: Indeed

Qualifications
- 10+ years of experience in cloud data modernization, architecture, design, and implementation
- Bachelor's degree in Computer Science, Engineering, a related field, or equivalent experience
- Experience with data analytics IT platform implementation, with hands-on implementation experience covering data lakes, modern data platforms, and data warehouses
- Hands-on experience developing software in one or more programming languages/frameworks such as Python, Spark, SQL, etc.
- Hands-on experience leading large-scale, full-cycle MPP enterprise data warehousing (EDW), data lake, and analytics projects
- Experience in discovery, assessment, migration/modernization estimation, and planning for data platforms
- Good knowledge of compute, storage, security, and networking technologies
- Good understanding of and experience with firewalls, VPCs, network routing, identity and access management, and security implementation

The Amazon Web Services Professional Services (ProServe) team is seeking a skilled Delivery Consultant to join our team at Amazon Web Services (AWS). In this role, you'll work closely with customers to design, implement, and manage AWS solutions that meet their technical requirements and business objectives. You'll be a key player in driving customer success through their cloud journey, providing technical expertise and best practices throughout the project lifecycle.

Possessing a deep understanding of AWS products and services, as a Delivery Consultant you will be proficient in architecting complex, scalable, and secure solutions tailored to meet the specific needs of each customer. You'll work closely with stakeholders to gather requirements, assess current infrastructure, and propose effective migration strategies to AWS. As trusted advisors to our customers, providing guidance on industry trends, emerging technologies, and innovative solutions, you will be responsible for leading the implementation process, ensuring adherence to best practices, optimizing performance, and managing risks throughout the project.

The AWS Professional Services organization is a global team of experts that helps customers realize their desired business outcomes when using the AWS Cloud. We work together with customer teams and the AWS Partner Network (APN) to execute enterprise cloud computing initiatives. Our team provides assistance through a collection of offerings which help customers achieve specific outcomes related to enterprise cloud adoption. We also deliver focused guidance through our global specialty practices, which cover a variety of solutions, technologies, and industries.

Key Job Responsibilities
- Designing and implementing complex, scalable, and secure AWS solutions tailored to customer needs
- Providing technical guidance and troubleshooting support throughout project delivery
- Collaborating with stakeholders to gather requirements and propose effective migration strategies
- Acting as a trusted advisor to customers on industry trends and emerging technologies
- Sharing knowledge within the organization through mentoring, training, and creating reusable artifacts

About the Team
Diverse Experiences: AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed below, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying.

Why AWS? Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating; that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.

Inclusive Team Culture: Here at AWS, it's in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (diversity) conferences, inspire us to never stop embracing our uniqueness.

Mentorship & Career Growth: We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge-sharing, mentorship, and other career-advancing resources here to help you develop into a better-rounded professional.

Work/Life Balance: We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there's nothing we can't achieve in the cloud.

Preferred Qualifications
- AWS experience preferred, with proficiency in a wide range of AWS services (e.g., EC2, S3, RDS, Lambda, IAM, VPC, CloudFormation)
- AWS Professional-level certifications (e.g., Solutions Architect Associate, Specialty, or Professional) preferred
- Experience with automation and scripting (e.g., Terraform, Python)
- Knowledge of security and compliance standards (e.g., HIPAA, GDPR)
- Strong communication skills, with the ability to explain technical concepts to both technical and non-technical audiences
- Experience migrating mission-critical data systems from one platform to another
- Experience with cloud technologies like AWS, Databricks, Azure, or Google Cloud

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Posted 1 week ago

Apply

7.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site

Source: LinkedIn

We are seeking a highly skilled and experienced Lead Data Engineer (7+ years) to join our dynamic team. As a Lead Data Engineer, you will play a crucial role in designing, developing, and maintaining our data infrastructure, and you will be responsible for the efficient and reliable collection, storage, and transformation of large-scale data to support business intelligence, analytics, and data-driven decision-making.

Key Responsibilities

Data Architecture & Design
Lead the design and implementation of robust data architectures that support data warehousing (DWH), data integration, and analytics platforms.
Develop and maintain ETL (Extract, Transform, Load) pipelines to ensure efficient processing of large datasets.

ETL Development
Design, develop, and optimize ETL processes using tools such as Informatica PowerCenter, Intelligent Data Management Cloud (IDMC), or custom Python scripts.
Implement data transformation and cleansing processes to ensure data quality and consistency across the enterprise.

Data Warehouse Development
Build and maintain scalable data warehouse solutions using Snowflake, Databricks, Redshift, or similar technologies.
Ensure efficient storage, retrieval, and processing of structured and semi-structured data.

Big Data & Cloud Technologies
Use AWS Glue and PySpark for large-scale data processing and transformation.
Implement and manage data pipelines using Apache Airflow for orchestration and scheduling.
Leverage cloud platforms (AWS, Azure, GCP) for data storage, processing, and analytics.

Data Management & Governance
Establish and enforce data governance and security best practices.
Ensure data integrity, accuracy, and availability across all data platforms.
Implement monitoring and alerting systems to ensure data pipeline reliability.

Collaboration & Leadership
Work closely with data stewards, analysts, and business stakeholders to understand data requirements and deliver solutions that meet business needs.
Mentor and guide junior data engineers, fostering a culture of continuous learning and development within the team.
Lead data-related projects from inception to delivery, ensuring alignment with business objectives and timelines.

Database Management
Design and manage relational databases (RDBMS) to support transactional and analytical workloads.
Optimize SQL queries for performance and scalability across database platforms.

Required Skills & Qualifications

Education
Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related field.

Experience
7+ years of experience in data engineering, ETL, and data warehouse development.
Proven experience with ETL tools such as Informatica PowerCenter or IDMC.
Strong proficiency in Python and PySpark for data processing.
Experience with cloud-based data platforms such as AWS Glue, Snowflake, Databricks, or Redshift.
Hands-on experience with SQL and RDBMS platforms (e.g., Oracle, MySQL, PostgreSQL).
Familiarity with data orchestration tools such as Apache Airflow.

Technical Skills
Advanced knowledge of data warehousing concepts and best practices.
Strong understanding of data modeling, schema design, and data governance.
Proficiency in designing and implementing scalable ETL pipelines.
Experience with cloud infrastructure (AWS, Azure, GCP) for data storage and processing.

Soft Skills
Excellent communication and collaboration skills.
Ability to lead and mentor a team of engineers.
Strong problem-solving and analytical thinking abilities.
Ability to manage multiple projects and prioritize tasks effectively.

Preferred Qualifications
Experience with machine learning workflows and data science tools.
Certification in AWS, Snowflake, Databricks, or other relevant data engineering technologies.
Experience with Agile methodologies and DevOps practices.

(ref:hirist.tech)
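For a concrete flavour of the hands-on work this role describes, here is a minimal PySpark ETL sketch. It is an illustrative assumption, not part of the posting: the bucket paths, column names, and cleansing rules are all invented.

# Minimal extract-transform-load sketch in PySpark.
# All paths and columns below are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw, semi-structured input (JSON lines).
raw = spark.read.json("s3://example-bucket/raw/orders/")

# Transform: cleanse and standardize before loading downstream.
clean = (
    raw
    .dropDuplicates(["order_id"])                     # de-duplicate on the business key
    .filter(F.col("order_amount") > 0)                # drop obviously invalid records
    .withColumn("order_date", F.to_date("order_ts"))  # derive a partition column
)

# Load: write partitioned Parquet for downstream analytics.
(clean.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-bucket/curated/orders/"))

In production, logic like this would typically run under an Apache Airflow DAG or an AWS Glue job, as the responsibilities above suggest.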

Posted 1 week ago

Apply

0 years

0 Lacs

Bhopal, Madhya Pradesh, India

On-site

Linkedin logo

Responsibilities
Establish scalable, efficient, automated processes for data analysis, data model development, validation, and implementation.
Work closely with analysts/data scientists to understand the impact on downstream data models.
Write efficient, well-organized software to ship products in an iterative, continual-release environment.
Contribute to and promote good software engineering practices across the team.
Communicate clearly and effectively to technical and non-technical audiences.

Minimum Qualifications
University or advanced degree in engineering, computer science, mathematics, or a related field.
Strong hands-on experience in Databricks using PySpark and Spark SQL (Unity Catalog, workflows, optimization techniques).
Experience with at least one cloud provider solution (GCP preferred).
Strong experience working with relational SQL databases.
Strong experience with an object-oriented/object-function scripting language: Python.
Working knowledge of a transformation tool; dbt preferred.
Ability to work on the Linux platform.
Strong knowledge of data pipeline and workflow management tools (Airflow).
Working knowledge of GitHub/Git toolkit.
Expertise in standard software engineering methodology, e.g. unit testing, code reviews, design documentation.
Experience creating data pipelines that prepare data for ingestion and consumption appropriately.
Experience maintaining and optimizing databases/filesystems for production use in reporting and analytics.
Comfort working in a collaborative environment and interacting effectively with technical and non-technical team members alike.
Good verbal and written communication skills.

(ref:hirist.tech)
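Since this posting leads with Databricks, PySpark, Spark SQL, and Unity Catalog, here is a small illustrative sketch of that combination. The catalog, schema, table, and column names are invented for the example.

# Reading Unity Catalog tables (catalog.schema.table) and applying a
# common optimization: broadcasting a small dimension table to avoid
# shuffling the large fact table. All names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.getOrCreate()

facts = spark.table("main.sales.transactions")   # large fact table
stores = spark.table("main.sales.stores")        # small dimension table

# Broadcast join: ships the dimension to every executor.
joined = facts.join(broadcast(stores), "store_id")

# Expose the result to Spark SQL for analyst-facing queries.
joined.createOrReplaceTempView("sales_by_store")
spark.sql("""
    SELECT store_id, SUM(amount) AS total_amount
    FROM sales_by_store
    GROUP BY store_id
""").show()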

Posted 1 week ago

Apply

4.0 - 5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

About The Role
We are looking for a dynamic and hands-on Data Analytics & AI professional to join our growing team. The ideal candidate will have a strong foundation in SQL and Python, experience with modern data engineering practices, and the ability to lead analytics initiatives while guiding junior team members. This role requires someone who can work in a fast-paced, evolving environment, handle ambiguity, and work directly with leadership to drive data-informed decisions.

Key Responsibilities
Design, build, and maintain data pipelines and analytical workflows following data engineering best practices.
Work on exploratory data analysis, business intelligence, and AI/ML-oriented problem-solving.
Use data visualization tools to build dashboards and reports for decision-making.
Collaborate directly with stakeholders and leadership to translate business requirements into analytics and data solutions.
Mentor and support junior analysts on the team.
Proactively adapt to changing business needs and adjust analytical approaches accordingly.
Be available for PST-time-zone meetings while working in the IST time zone.

Required Skills And Experience
4-5 years of experience in data analytics, data engineering, or related fields.
Strong proficiency in SQL and Python for data manipulation and analysis.
Hands-on experience designing and building data pipelines and working with large-scale datasets.
Familiarity with data engineering best practices such as modular code, documentation, version control (Git), and testing.
Experience with at least one data visualization tool (Tableau, Redash, Power BI, etc.).
Excellent communication skills and the ability to work directly with leadership and cross-functional teams.
Ability to thrive in dynamic, fast-changing environments with minimal supervision.
Strong analytical thinking with a bias for action and outcomes.

Preferred / Good To Have
Experience with tools such as Tableau, Redash, and Databricks.
Background in e-commerce or customer-focused data products or companies.
Exposure to cloud data platforms (e.g., AWS, GCP, Azure).
Experience working with ambiguous problem statements and hypothesis-driven analysis.

(ref:hirist.tech)
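To ground the SQL-plus-Python requirement, here is a small illustrative sketch of a typical analysis loop: aggregate in SQL, then explore in pandas. The connection string, table, and columns are placeholders, not details from the posting.

# Pull a SQL aggregate into pandas and compute a quick growth metric.
# The DSN, table, and column names are invented for illustration.
import pandas as pd
import sqlalchemy

engine = sqlalchemy.create_engine("postgresql://user:password@host/db")  # placeholder DSN

query = """
    SELECT signup_month, COUNT(*) AS signups
    FROM users
    GROUP BY signup_month
    ORDER BY signup_month
"""
df = pd.read_sql(query, engine)

# Month-over-month growth, a quick sanity check before dashboarding.
df["mom_growth_pct"] = df["signups"].pct_change() * 100
print(df.tail())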

Posted 1 week ago

Apply

2.0 years

0 Lacs

Itanagar, Arunachal Pradesh, India

On-site

Linkedin logo

Job Summary
We are seeking a highly skilled and motivated .NET Full Stack Developer to join our dynamic team. The ideal candidate will have a strong understanding of the entire software development lifecycle, from design and development to deployment and maintenance. You will be responsible for building, maintaining, and enhancing high-quality, scalable, and user-friendly web applications using the latest .NET technologies.

Responsibilities
Design, develop, and implement complex web applications using .NET 8, C#, and Azure.
Develop and maintain robust, scalable APIs (RESTful and gRPC) for frontend consumption.
Collaborate with cross-functional teams (UX/UI designers, product managers, QA) to deliver exceptional user experiences.
Develop and implement frontend applications using React JS and Tailwind CSS.
Integrate with various data sources, including Azure SQL, and leverage data technologies such as Python, Power BI, and Databricks for data analysis and visualization.
Develop and maintain mobile applications using React Native.
Participate in all phases of the software development lifecycle, including requirements gathering, design, development, testing, and deployment.
Write clean, well-documented, and maintainable code.
Troubleshoot and debug issues effectively.
Stay up to date with the latest technologies and industry best practices.
Contribute to the continuous improvement of our development processes.

Requirements
Bachelor's degree in Computer Science, Information Technology, or a related field.
2+ years of professional experience in .NET development.
Strong proficiency in .NET 8, C#, and Azure.
Expertise in frontend development using React JS and Tailwind CSS.
Experience with data technologies such as Python, Power BI, and Databricks is a plus.
Experience with mobile development using React Native is a plus.
Experience with CMS platforms such as Sitecore, Tridion, or WordPress is a plus.
Experience working in an Agile/Scrum environment.
Excellent communication and interpersonal skills.
Strong problem-solving and analytical skills.
Ability to work independently and as part of a team.
Passion for learning and a strong work ethic.

Bonus Points
Experience in the Logistics domain.
Experience with DevOps practices and tools.
Contributions to open-source projects.

(ref:hirist.tech)

Posted 1 week ago

Apply

5.0 - 6.0 years

0 Lacs

Itanagar, Arunachal Pradesh, India

Remote

Linkedin logo

Job Title : Data Engineer - Data Fabric/Bricks
Location : Remote

Job Summary
We are seeking an experienced Data Engineer to join our team as a contractor. The ideal candidate will have a strong background in designing, implementing, and managing large-scale data platforms using Data Fabric or Databricks. The candidate should have experience working with both structured and unstructured data and should have implemented at least one full lifecycle of a data platform.

Key Responsibilities
Design, develop, and deploy large-scale data platforms using Data Fabric or Databricks.
Work with cross-functional teams to identify and prioritize data requirements.
Develop and maintain data pipelines for structured and unstructured data.
Implement data governance and quality-control measures.
Collaborate with data scientists and analysts to integrate data into analytics workflows.
Optimize data platform performance, scalability, and reliability.
Develop and maintain technical documentation for data platforms.

Requirements
5-6 years of experience in data engineering, with a focus on Data Fabric or Databricks.
Experience with at least one full lifecycle of a data platform implementation.
Strong knowledge of data architecture, data modeling, and data governance.
Experience working with both structured and unstructured data.
Proficiency in Scala, Python, and SQL.
Experience with Apache Spark, data processing frameworks, and data governance tools.
Strong understanding of data security, compliance, and regulatory requirements.
Excellent communication and collaboration skills.
Experience with cloud-based data platforms such as Azure or AWS.

Nice To Have
Knowledge of machine learning and deep learning frameworks.
Experience with data visualization tools such as Tableau, Power BI, or D3.js.
Certification in data engineering or a related field.

(ref:hirist.tech)
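As an illustration of the "full lifecycle" work the posting refers to, here is a minimal Delta Lake sketch: an initial load followed by an incremental upsert with MERGE. It assumes a Spark session with Delta Lake (delta-spark) enabled; the paths and key column are invented.

# Initial load plus incremental MERGE upsert on a Delta table.
# Paths and the customer_id key are illustrative placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Initial load: persist the first batch as a Delta table.
batch = spark.read.parquet("/mnt/landing/customers/")
batch.write.format("delta").mode("overwrite").save("/mnt/curated/customers")

# Incremental load: upsert changed rows by primary key.
updates = spark.read.parquet("/mnt/landing/customers_changes/")
target = DeltaTable.forPath(spark, "/mnt/curated/customers")
(target.alias("t")
       .merge(updates.alias("u"), "t.customer_id = u.customer_id")
       .whenMatchedUpdateAll()
       .whenNotMatchedInsertAll()
       .execute())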

Posted 1 week ago

Apply

0 years

0 Lacs

Greater Chennai Area

On-site

Linkedin logo

Responsibilities
Proficiency in Azure Data Factory, Azure Databricks (including Spark and Delta Lake), and other Azure data services.
Strong programming skills in Python, with experience in data processing libraries such as Pandas, PySpark, and NumPy.
Experience with SQL and relational databases, as well as NoSQL databases.
Familiarity with data warehousing concepts and tools (e.g., Azure Synapse Analytics).
Proven experience as an Azure Data Engineer or in a similar role.
Strong proficiency in Azure Databricks, including Spark and Delta Lake.
Experience with Azure Data Factory, Azure Data Lake Storage, and Azure SQL Database.
Proficiency in data integration and ETL processes, and in T-SQL.
Proficiency in data warehousing concepts and Python.

(ref:hirist.tech)
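The skills above centre on Azure Databricks with Delta Lake over ADLS Gen2, so here is a brief illustrative sketch of that pattern. The storage account, container, and column names are invented placeholders.

# Read raw JSON from ADLS Gen2, aggregate, and write a Delta table
# that Azure Synapse or downstream jobs can query. Names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

raw_path = "abfss://raw@examplestorage.dfs.core.windows.net/events/"
events = spark.read.json(raw_path)

daily = (events
         .withColumn("event_date", F.to_date("event_ts"))
         .groupBy("event_date", "event_type")
         .count())

(daily.write
      .format("delta")
      .mode("overwrite")
      .save("abfss://curated@examplestorage.dfs.core.windows.net/daily_events/"))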

Posted 1 week ago

Apply

Exploring Databricks Jobs in India

Databricks is a popular technology in the field of big data and analytics, and the job market for Databricks professionals in India is growing rapidly. Companies across various industries are actively looking for skilled individuals with expertise in Databricks to help them harness the power of data. If you are considering a career in Databricks, here is a detailed guide to help you navigate the job market in India.

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Pune
  4. Chennai
  5. Mumbai

Average Salary Range

The average salary range for Databricks professionals in India varies by experience level:

  • Entry-level: INR 4-6 lakhs per annum
  • Mid-level: INR 8-12 lakhs per annum
  • Experienced: INR 15-25 lakhs per annum

Career Path

In the field of Databricks, a typical career path may progress through:

  • Junior Developer
  • Senior Developer
  • Tech Lead
  • Architect

Related Skills

In addition to Databricks expertise, skills that are often expected or helpful alongside Databricks include:

  • Apache Spark
  • Python/Scala programming
  • Data modeling
  • SQL
  • Data visualization tools

Interview Questions

  • What is Databricks and how is it different from Apache Spark? (basic)
  • Explain the concept of lazy evaluation in Databricks; a short sketch follows this list. (medium)
  • How do you optimize performance in Databricks? (advanced)
  • What are the different cluster modes in Databricks? (basic)
  • How do you handle data skewness in Databricks? (medium)
  • Explain how you can schedule jobs in Databricks. (medium)
  • What is the significance of Delta Lake in Databricks? (advanced)
  • How do you handle schema evolution in Databricks? A short sketch follows this list. (medium)
  • What are the different file formats supported by Databricks for reading and writing data? (basic)
  • Explain the concept of checkpointing in Databricks. (medium)
  • How do you troubleshoot performance issues in Databricks? (advanced)
  • What are the key components of Databricks Runtime? (basic)
  • How can you secure your data in Databricks? (medium)
  • Explain the role of MLflow in Databricks. (advanced)
  • How do you handle streaming data in Databricks? (medium)
  • What is the difference between Databricks Community Edition and Databricks Workspace? (basic)
  • How do you set up monitoring and alerting in Databricks? (medium)
  • Explain the concept of Delta caching in Databricks. (advanced)
  • How do you handle schema enforcement in Databricks? (medium)
  • What are the common challenges faced in Databricks projects and how do you overcome them? (advanced)
  • How do you perform ETL operations in Databricks? (medium)
  • Explain the concept of MLflow Tracking in Databricks. (advanced)
  • How do you handle data lineage in Databricks? (medium)
  • What are the best practices for data governance in Databricks? (advanced)
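Two of the questions above lend themselves to short code illustrations. Both sketches below are study aids rather than canonical answers: they presume a Spark session (with Delta Lake support for the second) and use invented paths and data.

Lazy evaluation: transformations only build a logical plan, and nothing executes until an action runs, which lets Spark optimize the whole chain at once.

# Transformations (range, withColumn, filter) build a plan; no job runs yet.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

df = spark.range(1_000_000)                     # transformation: lazy
doubled = df.withColumn("x2", F.col("id") * 2)  # still just a logical plan
filtered = doubled.filter(F.col("x2") % 4 == 0)

# Only an action (count, collect, show, write, ...) triggers execution.
print(filtered.count())

Schema enforcement and evolution: by default, Delta Lake rejects an append whose schema does not match the table, and setting mergeSchema opts in to additive evolution.

# Appending a frame with an extra column to an existing Delta table.
# Without mergeSchema the write fails schema enforcement; with it, the
# table schema evolves. The target path is an illustrative placeholder.
wider = spark.createDataFrame(
    [(1, "a@example.com")], ["customer_id", "email"]  # new email column
)
(wider.write.format("delta")
      .mode("append")
      .option("mergeSchema", "true")
      .save("/mnt/curated/customers"))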

Closing Remark

As you prepare for Databricks job interviews, make sure to brush up on your technical skills, stay updated with the latest trends in the field, and showcase your problem-solving abilities. With the right preparation and confidence, you can land your dream job in the exciting world of Databricks in India. Good luck!

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
