Home
Jobs

3417 Databricks Jobs - Page 6

Filter
Filter Interviews
Min: 0 years
Max: 25 years
Min: ₹0
Max: ₹10000000
Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

9.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Job Title: Data Engineer
Location: Bengaluru

L&T Technology Services is seeking a Data Engineer with 9+ years of relevant hands-on data engineering experience: data ingestion, processing, and exploratory analysis to build solutions that deliver value through data as an asset. The Data Engineer will build, test, and deploy data pipelines that move data across systems efficiently and reliably, and should stay on top of the latest architectural trends on the Azure cloud. We want people who understand parallel and distributed processing, storage, concurrency, and fault-tolerant systems, who thrive on new technologies, and who adapt and learn easily to meet the needs of next-generation engineering challenges.

Technical Skills (Must-Have): Applied experience with distributed data processing frameworks (Spark and Databricks with Python and SQL). Must have worked on at least 2 end-to-end data analytics projects covering Databricks configuration, Unity Catalog, Delta Sharing, and the medallion architecture. Applied experience with Azure data services (ADLS, Delta).

Required Skills: Azure Data Lake Storage (ADLS), advanced SQL and Python programming, Databricks expertise with the medallion architecture, data governance and security.

#AzureDataEngineer #AzureCloud #AzureDatabricks #AzureDataLake #AzureSynapse #AzureDataFactory #AzureSQL #Databricks #DataEngineering #Python #Flask
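The posting above asks for hands-on experience with the medallion (bronze/silver/gold) architecture. As a rough illustration of the idea only (not this employer's actual pipeline), here is a minimal sketch in plain Python, with dictionaries standing in for Delta tables; all field names and rules are hypothetical.

```python
# Medallion pattern sketch: raw landing (bronze), cleansed/conformed (silver),
# business-level aggregate (gold). Plain Python stands in for Spark/Delta.

def to_bronze(raw_records):
    """Bronze: land raw records as-is, tagging each with a source marker."""
    return [{**r, "_source": "api"} for r in raw_records]

def to_silver(bronze):
    """Silver: cleanse and conform - drop rows missing a key, normalize types."""
    return [
        {"id": int(r["id"]), "amount": float(r["amount"])}
        for r in bronze
        if r.get("id") is not None
    ]

def to_gold(silver):
    """Gold: aggregate into a business-level view (total amount per id)."""
    totals = {}
    for r in silver:
        totals[r["id"]] = totals.get(r["id"], 0.0) + r["amount"]
    return totals

raw = [{"id": "1", "amount": "10.5"}, {"id": "1", "amount": "4.5"},
       {"id": None, "amount": "99"}]
gold = to_gold(to_silver(to_bronze(raw)))
print(gold)  # {1: 15.0}
```

In a real Databricks setup each stage would read and write a Delta table rather than a Python list, but the layering logic is the same.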

Posted 1 day ago

Apply

14.0 years

0 Lacs

Greater Hyderabad Area

On-site


Area(s) of responsibility

About Birlasoft: Birlasoft is a powerhouse where domain expertise, enterprise solutions, and digital technologies converge to redefine business processes. We take pride in our consultative and design-thinking approach, driving societal progress by enabling our customers to run businesses with unmatched efficiency and innovation. As part of the CKA Birla Group, a multibillion-dollar enterprise, we boast a 12,500+ professional team committed to upholding the Group's 162-year legacy. Our core values prioritize Diversity, Equity, and Inclusion (DEI) initiatives, along with Corporate Social Responsibility (CSR) activities, demonstrating our dedication to building inclusive and sustainable communities. Join us in shaping a future where technology seamlessly aligns with purpose.

About the Job: The Azure Data Architect is responsible for designing, implementing, and managing scalable and secure data solutions on the Microsoft Azure cloud platform. This role requires a deep understanding of data transformation, data cleansing, data profiling, data architecture principles, cloud technologies, and data engineering practices, helping build a data ecosystem optimized for performance, scalability, and cost efficiency.

Job Title: Azure Data Factory Architect
Location: Noida/Pune
Educational Background: Bachelor's degree in Computer Science, Information Technology, or a related field
Mode of Work: Hybrid
Experience Required: 14+ years

Key Responsibilities:
Solution Design: Design and implement robust, scalable, and secure data architectures on Azure. Define end-to-end data solutions, including data ingestion, storage, transformation, processing, and analytics. Understand business requirements and translate them into technical solutions.
Azure Platform Expertise: Leverage Azure services such as Azure Data Factory, Azure Synapse Analytics, Azure Data Lake, Azure Databricks, Azure Cosmos DB, and Azure SQL Database. Understand optimization and cost management of Azure data solutions.
Data Integration and ETL/ELT Pipelines: Design and implement data pipelines for real-time and batch processing. Strong SQL skills for writing complex queries. Must know how to establish one-way or two-way communication channels when integrating various systems.
Data Governance and Security: Implement data security in line with organizational and regulatory requirements. Implement data quality assurance. Should know the different authentication methods used in cloud solutions.
Performance Optimization: Monitor, troubleshoot, and improve data solution performance. Implement best practices for data solutions.
Collaboration and Leadership: Provide technical leadership and mentorship to team members.

Mandatory Skills: Hands-on experience with Azure services such as Azure Data Factory, Azure Synapse Analytics, Azure Data Lake Storage Gen2, Azure Key Vault, Azure SQL Database, and Azure Databricks. Hands-on experience in data migration/transformation, data cleansing, and data profiling. Experience with Logic Apps.

Soft Skills: Communicates effectively. Problem-solving and analytical skills. Adapts to evolving technologies.

Posted 1 day ago

Apply

5.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site


Experience: 5+ years
Notice Period: Immediate to 15 days
Rounds: 3 (virtual)
Mandatory Skills: Apache Spark, Hive, Hadoop, Scala, Databricks

Job Description

The Role: Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights. Constructing infrastructure for efficient ETL processes from various sources and storage systems. Leading the implementation of algorithms and prototypes to transform raw data into useful information. Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations. Creating innovative data validation methods and data analysis tools. Ensuring compliance with data governance and security policies. Interpreting data trends and patterns to establish operational alerts. Developing analytical tools, programs, and reporting mechanisms. Conducting complex data analysis and presenting results effectively. Preparing data for prescriptive and predictive modeling. Continuously exploring opportunities to enhance data quality and reliability. Applying strong programming and problem-solving skills to develop scalable solutions.

Requirements: Experience with big data technologies (Hadoop, Spark, NiFi, Impala). 5+ years of hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, and distributed data pipelines. High proficiency in Scala/Java and Spark for applied large-scale data processing. Expertise with big data technologies, including Spark, Data Lake, and Hive.
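Roles like the one above mention creating data validation methods for pipelines. As a generic sketch of what record-level validation can look like (plain Python; the rules and field names are hypothetical, not this employer's actual checks):

```python
# Record-level validation sketch: each record is checked against simple rules
# and a list of human-readable problems is returned per record.

def validate(record, required=("order_id", "amount")):
    """Return a list of problems found in one record (empty list = clean)."""
    problems = []
    for field in required:
        if record.get(field) in (None, ""):
            problems.append(f"missing {field}")
    amount = record.get("amount")
    if amount is not None:
        try:
            if float(amount) < 0:
                problems.append("negative amount")
        except (TypeError, ValueError):
            problems.append("amount not numeric")
    return problems

rows = [
    {"order_id": "A1", "amount": "19.99"},
    {"order_id": "", "amount": "-5"},
]
report = {r.get("order_id") or "<blank>": validate(r) for r in rows}
print(report)
```

In a Spark pipeline the same rules would typically be expressed as DataFrame filters or a framework like Great Expectations, with failing rows routed to a quarantine table.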

Posted 1 day ago

Apply

12.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


About Pocket Entertainment: Pocket Entertainment is revolutionizing entertainment through immersive storytelling. With millions of users worldwide, we're building the future of entertainment — blending human creativity with cutting-edge AI. Now, we're expanding our global footprint. Let's reimagine global entertainment — powered by AI and fueled by imagination. Work on category-defining AI products reshaping entertainment that operate at massive scale: millions of users, billions of minutes consumed. Collaborate with some of the best engineers, AI researchers, and creators globally.

Mission of the Role: As a Senior Principal Data Scientist at Pocket FM, you will play a pivotal role in driving innovation in recommendation systems, user personalization, and content strategy. You will lead high-impact initiatives to understand user behavior, predict preferences, and enhance engagement through intelligent content delivery. This is a hands-on leadership role where you will set the technical vision, guide the roadmap, influence strategy, and mentor a high-performing data science team.

Key Responsibilities: Design and deploy machine learning models for recommendations, personalization, ranking, and user behavior prediction. Tackle high-impact challenges across personalization, churn prediction, LTV modeling, creator analytics, and platform growth. Partner with engineering, product, and content teams to align modeling efforts with business impact. Lead the full ML lifecycle: prototyping, experimentation, A/B testing, deployment, and monitoring. Translate large-scale user behavior data into actionable insights and strategies for content discovery and retention. Mentor and guide junior and senior data scientists, fostering a culture of technical excellence, innovation, and continuous learning. Conduct in-depth data analysis and exploratory research to uncover actionable insights, understand user trends, and identify new opportunities for product improvement and growth. Drive MLOps best practices, including model monitoring, versioning, and lifecycle management. Stay current with advances in ML/NLP and apply them to improve recommendation quality and user satisfaction.

Qualifications: Advanced degree (PhD or Master's) in CS, ML, Statistics, or a related field. 12+ years of experience building and deploying ML models in production. Expertise in recommender systems, personalization, ranking, or user modeling. Strong Python skills and deep experience with ML frameworks (e.g., PyTorch, TensorFlow, XGBoost). Solid grounding in experimentation, A/B testing, and statistical inference. Experience with big data tools (Spark, Databricks) and cloud platforms (AWS/GCP). Strong analytical and creative problem-solving skills. Strong communication and cross-functional collaboration skills.

Bonus Points: Experience with NLP techniques applied to text or audio data. Contributions to open-source ML projects or research publications. Familiarity with generative AI models and their applications.
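Several roles on this page, including the one above, call for grounding in A/B testing and statistical inference. As a generic illustration (the numbers below are invented, not any company's data), a two-proportion z-test for comparing conversion rates between a control and a variant might be sketched as:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates,
    using the pooled-proportion standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: 20.0% vs 26.0% conversion, 1000 users per arm.
z = two_proportion_z(200, 1000, 260, 1000)
print(round(z, 2))  # 3.19, which exceeds the 1.96 threshold at the 5% level
```

In practice a data scientist would also check sample-size requirements, multiple-testing corrections, and metric variance before calling a result significant.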

Posted 1 day ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Title: Data Engineer
Job Type: Full-Time
Location: On-site, Hyderabad, Telangana, India

Job Summary: We are seeking an accomplished Data Engineer to join one of our top customer's dynamic teams in Hyderabad. You will be instrumental in designing, implementing, and optimizing data pipelines that drive our business insights and analytics. If you are passionate about harnessing the power of big data, possess a strong technical skill set, and thrive in a collaborative environment, we would love to hear from you.

Key Responsibilities: Develop and maintain scalable data pipelines using Python, PySpark, and SQL. Implement robust data warehousing and data lake architectures. Leverage the Databricks platform to enhance data processing and analytics capabilities. Model, design, and optimize complex database schemas. Collaborate with cross-functional teams to understand data requirements and deliver actionable insights. Lead and mentor junior data engineers and establish best practices. Troubleshoot and resolve data processing issues promptly.

Required Skills and Qualifications: Strong proficiency in Python and PySpark. Extensive experience with the Databricks platform. Advanced SQL and data modeling skills. Demonstrated experience in data warehousing and data lake architectures. Exceptional problem-solving and analytical skills. Strong written and verbal communication skills.

Preferred Qualifications: Experience with graph databases, particularly MarkLogic. Proven track record of leading data engineering teams. Understanding of data governance and best practices in data management.

Posted 1 day ago

Apply

9.0 - 14.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Greetings from TCS!!! TCS is hiring for Big Data Architect.

Location: PAN India
Years of Experience: 9-14 years

Job Description: Experience with Python, Spark, and Hive data pipelines using ETL processes. Apache Hadoop development and implementation. Experience with streaming frameworks such as Kafka. Hands-on experience with Azure/AWS/Google data services. Work with big data technologies (Spark, Hadoop, BigQuery, Databricks) for data preprocessing and feature engineering.

Posted 1 day ago

Apply

7.0 years

0 Lacs

India

Remote


Job Title: Senior Data Engineer – Azure & API Development
Location: Remote
Experience Required: 7+ Years

Job Summary: We are looking for an experienced Data Engineer with strong expertise in Azure cloud architecture, API development, and modern data engineering tools. The ideal candidate will have in-depth experience building and maintaining scalable data pipelines and API integrations using Azure services such as Azure Data Factory (ADF), Databricks, Azure Functions, and Service Bus, along with infrastructure provisioning using Terraform.

Key Responsibilities: Design and implement scalable, secure, and high-performance data solutions on Azure. Develop, deploy, and manage RESTful APIs to support data access and integration. Build and maintain ETL/ELT data pipelines using Azure Data Factory, Databricks, and Azure Functions. Integrate data workflows with Azure Service Bus and other messaging services. Define and implement cloud infrastructure using Terraform and Infrastructure-as-Code (IaC) best practices. Collaborate with stakeholders to understand data requirements and develop technical solutions. Ensure best practices for data governance, security, monitoring, and performance optimization. Work closely with DevOps and Data Architects to implement CI/CD pipelines and production-grade deployments.

Must-Have Skills: 7+ years of professional experience in Data Engineering or related roles. Strong hands-on experience with Azure services, particularly Azure Data Factory (ADF), Databricks (Spark-based processing), Azure Functions, and Azure Service Bus. Proficient in API development (RESTful APIs using Python, .NET, or Node.js). Good command of SQL, Spark SQL, and data transformation techniques. Experience with Terraform for IaC and provisioning Azure resources. Excellent understanding of data architecture, cloud security, and governance models. Strong problem-solving skills and experience working in Agile environments.

Preferred Skills: Familiarity with CI/CD tools like Azure DevOps, GitHub Actions, or Jenkins. Exposure to event-driven architecture and real-time data streaming. Knowledge of containerization (Docker/Kubernetes) is a plus. Experience in performance tuning and cost optimization in Azure environments.

Posted 1 day ago

Apply

0 years

0 Lacs

India

Remote


Years of Experience: 8+
Mode of Work: Remote

Design, develop, modify, and test software applications for the healthcare industry in an agile environment. Duties include: Develop, support/maintain, and deploy software to support a variety of business needs. Provide technical leadership in the design, development, testing, deployment, and maintenance of software solutions. Design and implement platform and application security for applications. Perform advanced query analysis and performance troubleshooting. Coordinate with senior-level stakeholders to ensure the development of innovative software solutions to complex technical and creative issues. Re-design software applications to improve maintenance cost, testing functionality, platform independence, and performance. Manage user stories and project commitments in an agile framework to rapidly deliver value to customers. Deploy and operate software solutions using a DevOps model.

Required skills: Azure Delta Lake, ADF, Databricks, PySpark, Oozie, Airflow, big data technologies (HBase, Hive), CI/CD (GitHub/Jenkins)

Posted 1 day ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


The role involves building and managing data pipelines, troubleshooting issues, and ensuring data accuracy across platforms such as Azure Synapse Analytics, Azure Data Lake Gen2, and SQL environments. This position requires extensive SQL experience and a strong background in PySpark development.

Responsibilities

Data Engineering: Work with Azure Synapse pipelines and PySpark for data transformation and pipeline management. Perform data integration and schema updates in Delta Lake environments, ensuring smooth data flow and accurate reporting. Work with our Azure DevOps team on CI/CD processes for deployment of Infrastructure as Code (IaC) and workspace artifacts. Develop custom solutions for our customers as defined by our Data Architect and assist in improving our data solution patterns over time.

Documentation: Document ticket resolutions, testing protocols, and data validation processes. Collaborate with other stakeholders to provide specifications and quotations for enhancements requested by customers.

Ticket Management: Monitor the Jira ticket queue and respond to tickets as they are raised. Understand ticket issues, using extensive SQL, Synapse Analytics, and other tools to troubleshoot them. Communicate effectively with the customer users who raised the tickets and collaborate with other teams (e.g., FinOps, Databricks) as needed to resolve issues.

Troubleshooting and Support: Handle issues related to ETL pipeline failures, Delta Lake processing, or data inconsistencies in Synapse Analytics. Provide prompt resolution of data pipeline and validation issues, ensuring data integrity and performance.

Desired Skills & Requirements

We are seeking a candidate with 5+ years of Dynamics 365 ecosystem experience and a strong PySpark development background. While various profiles may apply, we highly value a strong person-organization fit. Our ideal candidate possesses the following attributes and qualifications: Extensive experience with SQL, including query writing and troubleshooting in Azure SQL, Synapse Analytics, and Delta Lake environments. Strong understanding of and experience in implementing and supporting ETL processes, data lakes, and data engineering solutions. Proficiency in using Azure Synapse Analytics, including workspace management, pipeline creation, and data flow management. Hands-on experience with PySpark for data processing and automation. Ability to use VPNs, MFA, RDP, jump boxes/jump hosts, etc., to operate within customers' secure environments. Some experience with Azure DevOps CI/CD IaC and release pipelines. Ability to communicate effectively both verbally and in writing, with strong problem-solving and analytical skills. Understanding of the operation and underlying data structure of D365 Finance and Operations, Business Central, and Customer Engagement. Experience with data engineering in Microsoft Fabric. Experience with Delta Lake and Azure data engineering concepts (e.g., ADLS, ADF, Synapse, AAD, Databricks). Certifications in Azure data engineering.

Why Join Us? The opportunity to work with innovative technologies in a dynamic environment: a progressive work culture with a global perspective, where your ideas truly matter and growth opportunities are endless. Work with the latest Microsoft technologies alongside Dynamics professionals committed to driving customer success. Enjoy the flexibility to work from anywhere and a work-life balance that suits your lifestyle. Competitive salary and comprehensive benefits package. Career growth and professional development opportunities. A collaborative and inclusive work culture.

Posted 1 day ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

On-site


At Aramya, we're redefining fashion for India's underserved Gen X/Y women, offering size-inclusive, comfortable, and stylish ethnic wear at affordable prices. Launched in 2024, we've already achieved ₹40 Cr in revenue in our first year, driven by a unique blend of data-driven design, in-house manufacturing, and a proprietary supply chain. Today, with an ARR of ₹100 Cr, we're scaling rapidly with ambitious growth plans for the future. Our vision is bold: to build the most loved fashion and lifestyle brands across the world while empowering individuals to express themselves effortlessly. Backed by marquee investors like Accel and Z47, we're on a mission to make high-quality ethnic wear accessible to every woman. We've built a community of loyal customers who love our weekly design launches, impeccable quality, and value-for-money offerings. With a fast-moving team driven by creativity, technology, and customer obsession, Aramya is more than a fashion brand—it's a movement to celebrate every woman's unique journey.

We're looking for a passionate Data Engineer with a strong foundation. The ideal candidate should have a solid understanding of D2C or e-commerce platforms and be able to work across the stack to build high-performing, user-centric digital experiences.

Key Responsibilities: Design, build, and maintain scalable ETL/ELT pipelines using tools like Apache Airflow, Databricks, and Spark. Own and manage data lakes/warehouses on AWS Redshift (or Snowflake/BigQuery). Optimize SQL queries and data models for analytics, performance, and reliability. Develop and maintain backend APIs using Python (FastAPI/Django/Flask) or Node.js. Integrate external data sources (APIs, SFTP, third-party connectors) and ensure data quality and validation. Implement monitoring, logging, and alerting for data pipeline health. Collaborate with stakeholders to gather requirements and define data contracts. Maintain infrastructure-as-code (Terraform/CDK) for data workflows and services.

Must-Have Skills: Strong in SQL and data modeling (OLTP and OLAP). Solid programming experience in Python, preferably for both ETL and backend. Hands-on experience with Databricks, Redshift, or Spark. Experience building and managing ETL pipelines using tools like Airflow, dbt, or similar. Deep understanding of REST APIs, microservices architecture, and backend design patterns. Familiarity with Docker, Git, and CI/CD pipelines. Good grasp of cloud platforms (preferably AWS) and services like S3, Lambda, ECS/Fargate, and CloudWatch.

Nice-to-Have Skills: Exposure to streaming platforms like Kafka, Kinesis, or Flink. Experience with Snowflake, BigQuery, or Delta Lake. Proficiency in data governance, security best practices, and PII handling. Familiarity with GraphQL, gRPC, or event-driven systems. Knowledge of data observability tools (Monte Carlo, Great Expectations, Datafold). Experience working in a D2C/eCommerce or analytics-heavy product environment.
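The responsibilities above include monitoring, logging, and alerting for data pipeline health. One common building block is a retry wrapper that logs transient failures before escalating; here is a hedged, purely illustrative sketch in plain Python (step names and the retry policy are hypothetical):

```python
# Retry-with-logging sketch for a pipeline step: failures are logged on each
# attempt, and an error is raised (which would trigger an alert) only after
# the retry budget is exhausted.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def run_with_retry(step, max_attempts=3):
    """Run a pipeline step, logging each failure; raise after max_attempts."""
    for attempt in range(1, max_attempts + 1):
        try:
            return step()
        except Exception as exc:
            log.warning("step %s failed (attempt %d/%d): %s",
                        step.__name__, attempt, max_attempts, exc)
    raise RuntimeError(f"{step.__name__} failed after {max_attempts} attempts")

calls = {"n": 0}

def flaky_load():
    """Simulated extract step that fails twice before succeeding."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("source unavailable")
    return "loaded"

result = run_with_retry(flaky_load)
print(result)  # loaded
```

In Airflow the same idea is usually expressed declaratively via task `retries` and `on_failure_callback` rather than hand-rolled, but the control flow is the same.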

Posted 1 day ago

Apply

10.0 years

0 Lacs

Greater Kolkata Area

On-site


We are looking for a Senior Data Lead to lead enterprise-level data modernization and innovation. In this highly strategic role, you will design scalable, secure, and future-ready data architectures, modernize legacy systems, and provide trusted technical leadership across both technology and business teams. This is a unique opportunity to make a company-wide impact by influencing data strategy and enabling smarter, faster decision-making through data.

Key Responsibilities

Architect & Design: Lead the development of robust, scalable data models, data management systems, and integration frameworks to ensure enterprise-wide data accuracy, consistency, and security.
Domain Expertise: Act as a subject matter expert across key business functions such as Supply Chain, Product Engineering, Sales & Marketing, Manufacturing, Finance, and Legal.
Modernization Leadership: Drive the transformation of legacy systems and manage end-to-end cloud migrations with minimal business disruption.
Collaboration: Partner with data engineers, scientists, analysts, and IT leaders to build high-performance, scalable data pipelines and transformation solutions.
Governance & Compliance: Establish and maintain data governance frameworks, including metadata repositories, data dictionaries, and data lineage documentation.
Strategic Advisory: Provide guidance on data architecture best practices, technology selection, and roadmap alignment to senior leadership and cross-functional teams.
Mentorship: Serve as a mentor and thought leader to junior data professionals, fostering a culture of innovation, knowledge sharing, and technical excellence.
Innovation & Trends: Stay abreast of emerging technologies in cloud, data platforms, and AI/ML to identify and implement innovative solutions.
Communication: Translate complex technical concepts into clear, actionable insights for technical and non-technical audiences alike.

Required Qualifications

10+ years of experience in data architecture, engineering, or enterprise data management roles. Demonstrated success leading large-scale data initiatives in life sciences or other highly regulated industries. Deep expertise in modern data architecture paradigms such as Data Lakehouse, Data Mesh, or Data Fabric. Strong hands-on experience with cloud platforms like AWS, Azure, or Google Cloud Platform (GCP). Proficiency in data modeling, ETL/ELT frameworks, and enterprise integration patterns. Deep understanding of data governance, metadata management, master data management (MDM), and data quality practices.

Experience with tools and platforms including but not limited to:
Data Integration: Informatica, Talend
Data Governance: Collibra
Modeling/Transformation: dbt
Cloud Platforms: Snowflake, Databricks

Excellent problem-solving skills with the ability to translate business requirements into scalable data solutions. Exceptional communication skills and experience engaging with both executive stakeholders and engineering teams.

Preferred Qualifications (Nice to Have): Experience with AI/ML data pipelines or real-time streaming architectures. Certifications in cloud technologies (e.g., AWS Certified Solutions Architect, Azure Data Engineer). Familiarity with regulatory frameworks such as GxP, HIPAA, or GDPR.

Posted 1 day ago

Apply

10.0 - 12.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Position Summary: IT – Lead Architect/Associate Principal – Azure Lake – D&IT DATA. This profile will lead a team of architects and engineers focused on strategic Azure architecture and AI projects.

Job Responsibilities:

Strategic Data Architecture and Roadmap: Develop and maintain the company's data architecture strategy aligned with business objectives. Lead design and/or architecture validation reviews with all stakeholders, assess projects against architectural principles and target solutions, and organize and lead architecture committees. Select new solutions that meet business needs, aligned with existing recommendations and solutions and, more broadly, with IS strategy. Model the company's information systems and business processes. Define a clear roadmap to modernize and optimize data management practices and technologies.

Emerging Technologies and Innovation: Drive the adoption of new technologies (AI/ML) and assess their impact on the organization's data strategy. Conduct technology watch in both the company's activity domains and IT technologies, promoting innovative solutions adapted to the company. Define principles, standards, and tools for system modeling and process management.

Platform Design and Implementation: Architect scalable data flows, storage solutions, and analytics platforms in cloud and hybrid environments. Ensure secure, high-performing, and cost-effective data solutions.

Data Governance and Quality: Establish data governance frameworks ensuring data accuracy, availability, and security. Promote and enforce best practices for data quality management. Ensure compliance with enterprise architecture standards and principles.

Technical Leadership: Act as a technical advisor on complex data-related projects and proofs of concept.

Stakeholder Collaboration: Collaborate with business stakeholders to understand data needs and translate them into architectural solutions. Work with relevant stakeholders to define project scope, plan development, and validate milestones throughout project execution.

Technology Exposure: A wide range of technologies related to data lakes: SQL, Synapse, Databricks, Power BI, Fabric, Python. Tools: Visual Studio & TFS, Git. Database: SQL Server, NoSQL. Methodologies: Agile (Scrum). SAP BW / SAC.

Required Skills: Expert in Azure, Databricks, and Synapse. Proven experience leading technical teams and strategic projects. Deep knowledge of cloud data platforms (Microsoft Azure, Fabric, Databricks, or AWS). Proven experience designing and implementing AI solutions within data architectures. Understanding of SAP-based technologies (SAP BW, SAP DataSphere, HANA, S/4, ECC). Experience with analytics, visualization, reporting, and self-service tools (Power BI, Tableau, SAP Analytics Cloud). Expert understanding of data modeling, ETL/ELT technologies, and big data. Experience with relational and NoSQL databases. Deep knowledge of data security and compliance best practices. Strong experience in solution architecture. Proven ability to lead AI/ML projects from conception to deployment. Familiarity with data mesh and data fabric architectural approaches.

Qualifications and Experience: 10-12 years of experience. Bachelor's or Master's degree in Computer Science, Data Science, or a related field. Experience in data architecture, with at least 5 years in a leadership role. Experience with AI/ML projects. Certifications in data architecture, cloud technologies, or project management. 5 years' experience in AI model design and deployment. Excellent communication and presentation skills for both technical and non-technical audiences. Strong problem-solving skills, stakeholder management, and the ability to navigate complexity.

Posted 1 day ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Title: Backend Developer – Python
Job Type: Full-Time
Location: On-site, Hyderabad, Telangana, India

Job Summary: Join one of our top customer's teams as a Backend Developer and help drive scalable, high-performance solutions at the intersection of machine learning and data engineering. You'll collaborate with skilled professionals to design, implement, and maintain backend systems powering advanced AI/ML applications in a dynamic, on-site environment.

Key Responsibilities: Develop, test, and deploy robust backend components and microservices using Python and PySpark. Implement and optimize data pipelines leveraging Databricks and distributed computing frameworks. Design and maintain efficient databases with MySQL, ensuring data integrity and high availability. Integrate machine learning models into production-ready backend systems supporting AI-driven features. Collaborate closely with data scientists and engineers to deliver end-to-end solutions aligned with business goals. Monitor, troubleshoot, and enhance system performance, utilizing Redis for caching and improved scalability. Write clear and maintainable documentation, and communicate effectively with team members both verbally and in writing.

Required Skills and Qualifications: Proficiency in Python programming for backend development. Hands-on experience with Databricks and PySpark in a production environment. Strong understanding of MySQL database design, querying, and performance tuning. Practical background in machine learning concepts and deploying ML models. Experience with Redis for caching and state management. Excellent written and verbal communication skills, with keen attention to detail. Demonstrated ability to work effectively in an on-site, collaborative setting in Hyderabad.

Preferred Qualifications: Previous experience in high-growth AI/ML or data engineering projects. Familiarity with additional backend technologies or cloud platforms. Demonstrated leadership or mentorship in technical teams.
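The role above uses Redis for caching and improved scalability. A common shape for that is the cache-aside pattern, sketched here with a plain dict standing in for a Redis client so the example runs anywhere; the "database" lookup and key scheme are hypothetical, not this employer's design.

```python
# Cache-aside sketch: try the cache first, fall back to the slow store,
# then populate the cache so the next read is fast.

cache = {}              # stand-in for a Redis client
db_hits = {"count": 0}  # counts how often the slow "database" is touched

def fetch_user(user_id):
    """Slow 'database' lookup we want to avoid repeating."""
    db_hits["count"] += 1
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    """Cache-aside read: cache hit returns immediately; miss fills the cache."""
    key = f"user:{user_id}"
    if key in cache:
        return cache[key]
    value = fetch_user(user_id)
    cache[key] = value      # with real Redis you would also set an expiry
    return value

get_user(42)
get_user(42)                # second call is served from the cache
print(db_hits["count"])     # 1
```

With an actual Redis client the dict operations become `get`/`set` calls (typically with a TTL), and invalidation on writes becomes the main design concern.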

Posted 1 day ago

Apply

4.0 years

0 Lacs

India

On-site


Job Title: Azure Databricks Engineer
Experience: 4+ Years

Required Skills: 4+ years of experience in data engineering. Strong hands-on experience with Azure Databricks and PySpark. Good understanding of Azure Data Factory (ADF), Azure Data Lake Storage (ADLS), and Azure Synapse. Strong SQL skills and experience with large-scale data processing. Experience with version control systems (Git), CI/CD pipelines, and Agile methodology. Knowledge of Delta Lake, Lakehouse architecture, and distributed computing concepts.

Preferred Skills: Experience with Airflow, Power BI, or machine learning pipelines. Familiarity with DevOps tools for automation and deployment in Azure. Azure certifications (e.g., DP-203) are a plus.

Posted 1 day ago

Apply

3.0 years

0 Lacs

India

Remote

Linkedin logo

Title: Data Engineer
Location: Remote
Employment type: Full-time with BayOne

We're looking for a skilled and motivated Data Engineer to join our growing team and help us build scalable data pipelines, optimize data platforms, and enable real-time analytics.

What You'll Do:
- Design, develop, and maintain robust data pipelines using tools like Databricks, PySpark, SQL, Fabric, and Azure Data Factory
- Collaborate with data scientists, analysts, and business teams to ensure data is accessible, clean, and actionable
- Work on modern data lakehouse architectures and contribute to data governance and quality frameworks

Tech Stack: Azure | Databricks | PySpark | SQL

What We're Looking For:
- 3+ years of experience in data engineering or analytics engineering
- Hands-on experience with cloud data platforms and large-scale data processing
- Strong problem-solving mindset and a passion for clean, efficient data design

Job Description:
- Minimum 3 years of experience with modern data engineering/data warehousing/data lake technologies on cloud platforms like Azure, AWS, GCP, Databricks, etc. Azure experience is preferred over other cloud platforms.
- 5 years of proven experience with SQL, schema design, and dimensional data modelling
- Solid knowledge of data warehouse best practices, development standards, and methodologies
- Experience with ETL/ELT tools like ADF, Informatica, Talend, etc., and data warehousing technologies like Azure Synapse, Microsoft Fabric, Azure SQL, Amazon Redshift, Snowflake, Google BigQuery, etc.
- Strong experience with big data tools (Databricks, Spark, etc.) and programming skills in PySpark and Spark SQL
- Independent self-learner with a "let's get this done" approach and the ability to work in a fast-paced, dynamic environment
- Excellent communication and teamwork abilities

Nice-to-Have Skills:
- Event Hub, IoT Hub, Azure Stream Analytics, Azure Analysis Services, and Cosmos DB knowledge
- SAP ECC/S4 and HANA knowledge
- Intermediate knowledge of Power BI
- Azure DevOps and CI/CD deployments; cloud migration methodologies and processes

BayOne is an Equal Opportunity Employer and does not discriminate against any employee or applicant for employment because of race, color, sex, age, religion, sexual orientation, gender identity, status as a veteran, and basis of disability or any federal, state, or local protected class.

This job posting represents the general duties and requirements necessary to perform this position and is not an exhaustive statement of all responsibilities, duties, and skills required. Management reserves the right to revise or alter this job description.
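The dimensional-modelling skill this posting asks for centers on star schemas: a fact table joined to dimension tables and aggregated by dimension attributes. As a rough, self-contained sketch (using SQLite in place of a warehouse engine such as Synapse or Snowflake, with hypothetical table names), the core query pattern looks like:

```python
import sqlite3

# Build a tiny star schema in memory: one dimension, one fact table.
# Table and column names are illustrative, not from the posting.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
CREATE TABLE fact_sales (sale_id INTEGER PRIMARY KEY,
                         product_id INTEGER REFERENCES dim_product(product_id),
                         amount REAL);
INSERT INTO dim_product VALUES (1, 'widgets'), (2, 'gadgets');
INSERT INTO fact_sales VALUES (10, 1, 100.0), (11, 1, 50.0), (12, 2, 75.0);
""")

# Typical warehouse query: aggregate the fact table by a dimension attribute.
rows = conn.execute("""
    SELECT p.category, SUM(f.amount) AS revenue
    FROM fact_sales f
    JOIN dim_product p ON p.product_id = f.product_id
    GROUP BY p.category
    ORDER BY revenue DESC
""").fetchall()
# rows -> [('widgets', 150.0), ('gadgets', 75.0)]
```

The same fact/dimension join runs unchanged (modulo dialect details) in Spark SQL or any of the warehouse engines the posting lists.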

Posted 1 day ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

Job Description

Job Title: Backend Developer
Job Type: Full-time
Location: On-site, Hyderabad, Telangana, India

About us: Our mission at micro1 is to match the most talented people in the world with their dream jobs. If you are looking to be at the forefront of AI innovation and work with some of the fastest growing companies in Silicon Valley, we invite you to apply for a role. By joining the micro1 community, your resume will become visible to top industry leaders, unlocking access to the best career opportunities on the market.

Job Summary: Join our customer's team as a Backend Developer and play a pivotal role in building high-impact backend solutions at the forefront of AI and data engineering. This is your chance to work in a collaborative, onsite environment where your technical expertise and communication skills will drive the success of next-generation AI/ML applications.

Key Responsibilities:
- Develop, test, and maintain scalable backend components and microservices using Python and PySpark.
- Build and optimize advanced data pipelines leveraging Databricks and distributed computing platforms.
- Design and administer efficient MySQL databases, focusing on data integrity, availability, and performance.
- Integrate machine learning models into production-grade backend systems powering innovative AI features.
- Collaborate with data scientists and engineering peers to deliver comprehensive, business-driven solutions.
- Monitor, troubleshoot, and enhance system performance using Redis for caching and scalability.
- Create clear technical documentation and communicate proactively with the team, emphasizing both written and verbal skills.

Required Skills and Qualifications:
- Proficient in Python for backend development with strong coding standards.
- Practical experience with Databricks and PySpark in live production environments.
- Advanced knowledge of MySQL database design, query optimization, and maintenance.
- Solid foundation in machine learning concepts and deploying ML models in backend systems.
- Experience utilizing Redis for effective caching and state management.
- Outstanding written and verbal communication abilities with strong attention to detail.
- Demonstrated success working collaboratively in a fast-paced onsite setting in Hyderabad.

Preferred Qualifications:
- Background in high-growth AI/ML or complex data engineering projects.
- Familiarity with additional backend technologies or cloud-based platforms.
- Experience mentoring or leading technical teams.

Be a key contributor to our customer's team, delivering backend systems that seamlessly bridge data engineering and AI innovation. We value professionals who thrive on clear communication, technical excellence, and collaborative problem-solving.

Posted 1 day ago

Apply


4.0 years

0 Lacs

Mumbai, Maharashtra

Remote

Indeed logo

Solution Engineer - Data & AI
Mumbai, Maharashtra, India

Date posted: Jun 16, 2025
Job number: 1830869
Work site: Up to 50% work from home
Travel: 25-50%
Role type: Individual Contributor
Profession: Technology Sales
Discipline: Technology Specialists
Employment type: Full-Time

Overview
As a Data Platform Solution Engineer (SE), you will play a pivotal role in helping enterprises unlock the full potential of Microsoft's cloud database and analytics stack across every stage of deployment. You'll collaborate closely with engineering leaders and platform teams to accelerate the Fabric Data Platform, including Azure Databases and Analytics, through hands-on engagements such as proofs of concept, hackathons, and architecture workshops. This opportunity will allow you to accelerate your career growth, develop deep business acumen, hone your technical skills, and become adept at solution design and deployment.

As a trusted technical advisor, you'll guide customers through secure, scalable solution design, influence technical decisions, and accelerate database and analytics migration into their deployment workflows. In summary, you'll help customers modernize their data platform and realize the full value of Microsoft's platform, all while enjoying flexible work opportunities.

Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Qualifications
- 6+ years of technical pre-sales, technical consulting, technology delivery, or related experience, OR equivalent experience
- 4+ years of experience with cloud and hybrid or on-premises infrastructure, architecture designs, migrations, industry standards, and/or technology management
- Proficient in data warehouse and big data migration, including on-prem appliances (Teradata, Netezza, Oracle), Hadoop (Cloudera, Hortonworks), and Azure Synapse Gen2

OR
- 5+ years of technical pre-sales or technical consulting experience, OR a Bachelor's degree in Computer Science, Information Technology, or a related field AND 4+ years of technical pre-sales or technical consulting experience, OR a Master's degree in Computer Science, Information Technology, or a related field AND 3+ years of technical pre-sales or technical consulting experience, OR equivalent experience

- Expert in Azure Databases (SQL DB, Cosmos DB, PostgreSQL), from migration and modernization to creating new AI apps
- Expert in Azure Analytics (Fabric, Azure Databricks, Purview) and competitors (BigQuery, Redshift, Snowflake) across data warehouse, data lake, big data, analytics, real-time intelligence, and reporting using integrated Data Security & Governance
- Proven ability to lead technical engagements (e.g., hackathons, PoCs, MVPs) that drive production-scale outcomes

Responsibilities
- Drive technical sales with decision makers, using demos and PoCs to influence solution design and enable production deployments
- Lead hands-on engagements (hackathons and architecture workshops) to accelerate adoption of Microsoft's cloud platforms
- Build trusted relationships with platform leads, co-designing secure, scalable architectures and solutions
- Resolve technical blockers and objections, collaborating with engineering to share insights and improve products
- Maintain deep expertise in the Analytics Portfolio (Microsoft Fabric: OneLake, DW, real-time intelligence, BI, Copilot; Azure Databricks; Purview Data Governance) and Azure Databases (SQL DB, Cosmos DB, PostgreSQL)
- Maintain and grow expertise in on-prem EDW (Teradata, Netezza, Exadata), Hadoop, and BI solutions
- Represent Microsoft through thought leadership in cloud Database & Analytics communities and customer forums

Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work.
- Industry leading healthcare
- Educational resources
- Discounts on products and services
- Savings and investments
- Maternity and paternity leave
- Generous time away
- Giving programs
- Opportunities to network and connect

Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.

Posted 1 day ago

Apply

4.0 years

0 Lacs

Gurugram, Haryana

Remote

Indeed logo

Solution Engineer - Cloud & Data AI
Gurgaon, Haryana, India

Date posted: Jun 16, 2025
Job number: 1830866
Work site: Up to 50% work from home
Travel: 25-50%
Role type: Individual Contributor
Profession: Technology Sales
Discipline: Technology Specialists
Employment type: Full-Time

Overview
As a Data Platform Solution Engineer (SE), you will play a pivotal role in helping enterprises unlock the full potential of Microsoft's cloud database and analytics stack across every stage of deployment. You'll collaborate closely with engineering leaders and platform teams to accelerate the Fabric Data Platform, including Azure Databases and Analytics, through hands-on engagements such as proofs of concept, hackathons, and architecture workshops. This opportunity will allow you to accelerate your career growth, develop deep business acumen, hone your technical skills, and become adept at solution design and deployment.

As a trusted technical advisor, you'll guide customers through secure, scalable solution design, influence technical decisions, and accelerate database and analytics migration into their deployment workflows. In summary, you'll help customers modernize their data platform and realize the full value of Microsoft's platform, all while enjoying flexible work opportunities.

Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Qualifications
Preferred:
- 6+ years of technical pre-sales, technical consulting, technology delivery, or related experience, OR equivalent experience
- 4+ years of experience with cloud and hybrid or on-premises infrastructure, architecture designs, migrations, industry standards, and/or technology management
- Proficient in data warehouse and big data migration, including on-prem appliances (Teradata, Netezza, Oracle), Hadoop (Cloudera, Hortonworks), and Azure Synapse Gen2

OR
- 5+ years of technical pre-sales or technical consulting experience, OR a Bachelor's degree in Computer Science, Information Technology, or a related field AND 4+ years of technical pre-sales or technical consulting experience, OR a Master's degree in Computer Science, Information Technology, or a related field AND 3+ years of technical pre-sales or technical consulting experience, OR equivalent experience

- Expert in Azure Databases (SQL DB, Cosmos DB, PostgreSQL), from migration and modernization to creating new AI apps
- Expert in Azure Analytics (Fabric, Azure Databricks, Purview) and other cloud products (BigQuery, Redshift, Snowflake) across data warehouse, data lake, big data, analytics, real-time intelligence, and reporting using integrated Data Security & Governance
- Proven ability to lead technical engagements (e.g., hackathons, PoCs, MVPs) that drive production-scale outcomes

Responsibilities
- Drive technical conversations with decision makers, using demos and PoCs to influence solution design and enable production deployments
- Lead hands-on engagements (hackathons and architecture workshops) to accelerate adoption of Microsoft's cloud platforms
- Build trusted relationships with platform leads, co-designing secure, scalable architectures and solutions
- Resolve technical blockers and objections, collaborating with engineering to share insights and improve products
- Maintain deep expertise in the Analytics Portfolio (Microsoft Fabric: OneLake, DW, real-time intelligence, BI, Copilot; Azure Databricks; Purview Data Governance) and Azure Databases (SQL DB, Cosmos DB, PostgreSQL)
- Maintain and grow expertise in on-prem EDW (Teradata, Netezza, Exadata), Hadoop, and BI solutions
- Represent Microsoft through thought leadership in cloud Database & Analytics communities and customer forums

Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work.
- Industry leading healthcare
- Educational resources
- Discounts on products and services
- Savings and investments
- Maternity and paternity leave
- Generous time away
- Giving programs
- Opportunities to network and connect

Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.

Posted 1 day ago

Apply

0.0 - 5.0 years

0 Lacs

Pune, Maharashtra

On-site

Indeed logo

SalesForce (SF) - Pune
Posted On: 16 Jun 2025
End Date: 31 Dec 2025
Required Experience: 8 - 13 Years
Role: Software Engineer
Employment Type: Full Time Employee
Company: NewVision
Company Name: New Vision Softcom & Consultancy Pvt. Ltd
Function: Business Units (BU)
Department/Practice: SalesForce (SF)
Organization Unit: SalesForce (SF)
Region: APAC
Country: India
Base Office Location: Pune
Working Model: Hybrid
Weekly Off: Pune Office Standard
State: Maharashtra
Skill: SALESFORCE
Highest Education: GRADUATION/EQUIVALENT COURSE
Certification: No data available
Working Language: No data available

Job Description

Responsibilities:

Salesforce Architecture & Strategy:
- Design and implement enterprise-level Salesforce architecture aligned with business objectives and industry best practices
- Develop comprehensive solution roadmaps for Salesforce platform evolution and optimization
- Provide expert architectural guidance on complex Salesforce implementations, customizations, and integrations
- Lead strategic initiatives to maximize the value and capabilities of our Salesforce ecosystem
- Establish governance frameworks, development standards, and architectural principles for sustainable growth

Advanced Salesforce Development & Administration:
- Design and implement sophisticated solutions using Apex, Lightning Web Components, SOQL, SOSL, and other Salesforce technologies
- Architect complex configurations and customizations across multiple Salesforce clouds (Sales Cloud, Service Cloud, CPQ, etc.)
- Develop and maintain scalable data models, security frameworks, and system integrations
- Create advanced automation solutions using Flow, Process Builder, and Apex triggers
- Ensure platform stability, performance optimization, and adherence to Salesforce best practices

Sales & Operations Expertise:
- Partner with sales and operations leadership to translate complex business requirements into effective Salesforce solutions
- Design and implement advanced CPQ solutions to support complex pricing, quoting, and proposal processes specific to energy efficiency services
- Architect SiteTracker implementations to optimize field operations, project management, and service delivery
- Develop sophisticated sales and operations dashboards, reports, and analytics to drive data-informed decision making
- Ensure Salesforce solutions support end-to-end business processes from lead generation through project implementation and ongoing service

AI & Automation Implementation:
- Lead the implementation of Einstein Analytics, Einstein Discovery, and other AI-powered Salesforce capabilities
- Design intelligent automation solutions to streamline sales processes, operations workflows, and customer service
- Develop predictive models and data-driven insights to enhance forecasting, opportunity management, and customer success
- Architect solutions leveraging AI to optimize energy efficiency project planning and implementation
- Stay at the forefront of emerging Salesforce AI technologies and identify applications relevant to our industry

Integration & System Architecture:
- Design and implement enterprise-grade integrations between Salesforce and other critical business systems
- Architect data flows between Salesforce and analytics platforms, including Databricks, to enable advanced business intelligence
- Develop integration strategies that ensure data consistency, accuracy, and timeliness across the technology ecosystem
- Create scalable integration frameworks that support real-time data exchange and business process automation
- Ensure seamless integration between SiteTracker and other components of the Salesforce ecosystem

Leadership & Knowledge Transfer:
- Serve as the principal Salesforce technical authority for the organization
- Mentor and guide development and administration teams on Salesforce best practices
- Lead technical discussions with stakeholders at all levels of the organization
- Provide thought leadership on industry trends and emerging Salesforce capabilities
- Drive innovation and continuous improvement of Salesforce solutions

Qualifications:

Required Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or related field
- Minimum of 8+ years of experience with Salesforce, including at least 5 years in architectural or senior development roles
- Advanced expertise in Salesforce administration, development, and solution architecture
- Extensive experience with multiple Salesforce clouds, including Sales Cloud, Service Cloud, and CPQ
- Demonstrated experience with SiteTracker implementation and optimization
- Strong understanding of energy efficiency, sustainability, or related industries (utilities, energy services, etc.)
- Expert-level knowledge of Apex, Lightning Web Components, SOQL, and Salesforce integration patterns
- Experience designing and implementing complex business processes using Flow, Process Builder, and custom development
- Proven ability to architect enterprise-grade solutions that address sophisticated business requirements
- Strong technical leadership skills with experience mentoring development teams
- Excellent communication skills and ability to translate complex technical concepts to non-technical stakeholders

Preferred Qualifications:
- Multiple Salesforce certifications, including Salesforce Certified Technical Architect or System Architect
- Experience with Databricks or similar data analytics platforms and their integration with Salesforce
- Knowledge of AI/ML technologies and their application within the Salesforce ecosystem
- Experience in energy efficiency measurement and verification, sustainability reporting, or energy management systems
- Experience with Field Service Lightning and its application to energy efficiency project management
- Expertise in Salesforce DevOps practices, including CI/CD, version control, and release management
- Experience with Mulesoft, Dell Boomi, or other enterprise integration platforms
- Knowledge of advanced analytics and business intelligence tools that complement Salesforce
- Experience with agile development methodologies in an enterprise environment
- Background in implementing Salesforce solutions for companies with similar business models to

Posted 1 day ago

Apply



0 years

0 Lacs

Pune, Maharashtra, India

On-site


Current scope and span of work:

Summary: The need is for a data engineer to handle day-to-day activities involving data ingestion from multiple source locations, help identify data sources, troubleshoot issues, and engage with a third-party vendor to meet stakeholders' needs.

Required Skills:
• Python
• Processing of large quantities of text documents
• Extraction of text from Office and PDF documents
• Input JSON to an API, output JSON to an API
• NiFi (or similar technology compatible with current EMIT practices)
• Basic understanding of AI/ML concepts
• Database/search engine/SOLR skills
• SQL – build queries to analyze, create, and update databases
• Understands the basics of hybrid search
• Experience working with terabytes (TB) of data
• Basic OpenML/Python/Azure knowledge
• Scripting knowledge/experience in an Azure environment to automate
• Cloud systems experience related to search and databases

Platforms:
• Databricks
• Snowflake
• ESRI ArcGIS / SDE
• New GenAI app being developed

Scope of work:
1. Ingest TB of data from multiple sources identified by the Ingestion Lead
2. Optimize data pipelines to improve data processing, speed, and data availability
3. Make data available for end users from several hundred LAN and SharePoint areas
4. Monitor data pipelines daily and fix issues related to scripts, platforms, and ingestion
5. Work closely with the Ingestion Lead and vendor on issues related to data ingestion

Technical Skills demonstrated:
1. SOLR – backend database
2. NiFi – data movement
3. PySpark – data processing
4. Hive & Oozie – job monitoring
5. Querying – SQL, HQL, and SOLR querying
6. SQL
7. Python

Behavioral Skills demonstrated:
1. Excellent communication skills
2. Ability to receive direction from a Lead and implement
3. Prior experience working in an Agile setup, preferred
4. Experience troubleshooting technical issues and quality control checking of work
5. Experience working with a globally distributed team in different
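The "input JSON to an API, output JSON to an API" requirement boils down to serializing and parsing payloads around a service call. A minimal sketch using only the Python standard library — the field names and status value here are illustrative assumptions, not taken from the posting:

```python
import json

def build_ingest_payload(doc_id, text):
    """Serialize a document into the JSON body an ingestion API might expect.
    The field names ("id", "content", "length") are hypothetical."""
    return json.dumps({"id": doc_id, "content": text, "length": len(text)})

def parse_ingest_response(raw):
    """Deserialize a JSON API response and surface its status field,
    falling back to "unknown" when the field is absent."""
    body = json.loads(raw)
    return body.get("status", "unknown")

payload = build_ingest_payload("doc-001", "Quarterly report text ...")
status = parse_ingest_response('{"status": "indexed"}')
print(status)  # → indexed
```

In a real pipeline the payload would be POSTed (for example with `urllib.request` or NiFi's HTTP processors) rather than handled in-process; the sketch only shows the JSON-in/JSON-out contract.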

Posted 1 day ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Databricks Unified Data Analytics Platform, Microsoft Azure Databricks, PySpark
Good-to-have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with team members to understand project needs, developing application features, and ensuring that the applications are aligned with business objectives. You will also engage in testing and troubleshooting to enhance application performance and user experience, while continuously seeking opportunities for improvement and innovation in application development.

Roles & Responsibilities:
• Expected to perform independently and become an SME.
• Active participation/contribution in team discussions is required.
• Contribute to providing solutions to work-related problems.
• Assist in the documentation of application processes and workflows.
• Engage in code reviews to ensure quality and adherence to best practices.

Professional & Technical Skills:
• Must-have skills: Proficiency in Databricks Unified Data Analytics Platform, Microsoft Azure Databricks, PySpark.
• Strong understanding of data integration techniques and ETL processes.
• Experience with cloud-based application development and deployment.
• Familiarity with agile development methodologies and practices.
• Ability to troubleshoot and optimize application performance.

Additional Information:
• The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
• This position is based at our Hyderabad office.
• A 15 years full time education is required.

Posted 1 day ago

Apply

7.0 years

0 Lacs

Haveli, Maharashtra, India

On-site


Sr. Software Engineer - Azure Power BI
Job Date: Jun 15, 2025
Job Requisition Id: 61602
Location: Pune, MH, IN

YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. At YASH, we're a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth – bringing real positive changes in an increasingly virtual world – and it drives us beyond generational gaps and disruptions of the future.

We are looking forward to hiring Power BI professionals in the following areas:

Job Description: The candidate should have strong hands-on Power BI experience with a robust background in Azure data modeling and ETL (Extract, Transform, Load) processes, essential hands-on experience with advanced SQL and Python, proficiency in building data lakes and pipelines using Azure, and MS Fabric implementation experience. Additionally, knowledge and experience in the Quality domain, and Azure certifications, are considered a plus.

Required Skills:
• 7+ years of experience in software engineering, with a focus on data engineering.
• Proven 5+ years of extensive hands-on experience in Power BI report development.
• Proven 3+ years in data analytics, with a strong focus on Azure data services.
• Strong experience in data modeling and ETL processes.
• Advanced hands-on SQL and Python knowledge, and experience working with relational databases for data querying and retrieval.
• Drive best practices in data engineering, data modeling, data integration, and data visualization to ensure the reliability, scalability, and performance of data solutions.
• Able to work independently end to end and guide other team members.
• Exposure to Microsoft Fabric is good to have.
• Good knowledge of SAP and quality processes.
• Excellent business communication skills.
• Good analytical skills to analyze data and understand business requirements.
• Excellent knowledge of SQL for performing data analysis and performance tuning.
• Ability to test and document end-to-end processes.
• Proficient in the MS Office suite (Word, Excel, PowerPoint, Access, Visio).
• Proven strong relationship-building and communication skills with team members and business users.
• Excellent communication and presentation skills, with the ability to effectively convey technical concepts to non-technical stakeholders.
• Partner with business stakeholders to understand their data requirements, challenges, and opportunities, and identify areas where data analytics can drive value.

Desired Skills:
• Extensive hands-on experience with Power BI.
• Proven 5+ years of experience in data analytics with a strong focus on Azure data services and Power BI.
• Exposure to Azure Data Factory, Azure Synapse Analytics, Azure Databricks.
• Solid understanding of data visualization and engineering principles, including data modeling, ETL/ELT processes, and data warehousing concepts.
• Experience with Microsoft Fabric is good to have.
• Strong proficiency in SQL.
• HANA modelling experience is nice to have.
• BusinessObjects, Tableau nice to have.
• Experience working in a captive unit is a plus.
• Excellent communication and interpersonal skills, with the ability to effectively collaborate with cross-functional teams and communicate technical concepts to non-technical stakeholders.
• Strong problem-solving skills and the ability to thrive in a fast-paced, dynamic environment.

Responsibilities:
• Work with Quality and IT teams to design and implement data solutions, including the methods and processes used to translate business needs into functional and technical specifications.
• Design, develop, and maintain robust data models, ETL pipelines, and visualizations.
• Build Power BI reports and dashboards.
• Build a new data lake in Azure, expanding and optimizing the data platform and data pipeline architecture, and optimize data flow and collection for cross-functional teams.
• Design and develop solutions in Azure big data frameworks/tools: Azure Data Lake, Azure Data Factory, Fabric.
• Develop and maintain Python scripts for data processing and automation.
• Troubleshoot and resolve data-related issues and provide support for escalated technical problems.
• Drive process improvement.
• Ensure data quality and integrity across various data sources and systems; maintain the quality and integrity of data in the warehouse, correcting any data problems.
• Participate in code reviews and contribute to best practices for data engineering.
• Ensure data security and compliance with relevant regulations and best practices.
• Develop standards, process flows, and tools that promote and facilitate the mapping of data sources, documenting interfaces and data movement across the enterprise.
• Ensure the design meets the requirements.

Education: IT graduate (BE, BTech, MCA) preferred.

At YASH, you are empowered to create a career that will take you to where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence aided with technology for continuous learning, unlearning, and relearning at a rapid pace and scale. Our Hyperlearning workplace is grounded upon four principles:
• Flexible work arrangements, free spirit, and emotional positivity
• Agile self-determination, trust, transparency, and open collaboration
• All support needed for the realization of business goals
• Stable employment with a great atmosphere and ethical corporate culture

Posted 1 day ago

Apply

Exploring Databricks Jobs in India

Databricks is a popular technology in the field of big data and analytics, and the job market for Databricks professionals in India is growing rapidly. Companies across various industries are actively looking for skilled individuals with expertise in Databricks to help them harness the power of data. If you are considering a career in Databricks, here is a detailed guide to help you navigate the job market in India.

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Pune
  4. Chennai
  5. Mumbai

Average Salary Range

The average salary range for Databricks professionals in India varies based on experience level:
• Entry-level: INR 4-6 lakhs per annum
• Mid-level: INR 8-12 lakhs per annum
• Experienced: INR 15-25 lakhs per annum

Career Path

In the field of Databricks, a typical career path may include:
  1. Junior Developer
  2. Senior Developer
  3. Tech Lead
  4. Architect

Related Skills

In addition to Databricks expertise, other skills that are often expected or helpful alongside Databricks include:
  • Apache Spark
  • Python/Scala programming
  • Data modeling
  • SQL
  • Data visualization tools
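Since the Python-plus-SQL combination appears in nearly every Databricks-adjacent role, here is a minimal illustration of that pairing using only Python's built-in sqlite3 module. The table and data are invented for the example; real Databricks work would run the same kind of query through Spark SQL against Delta tables instead:

```python
import sqlite3

# In-memory database standing in for a warehouse table; schema is illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("south", 120.0), ("north", 80.0), ("south", 50.0)],
)

# A typical analytics aggregation: total sales per region, highest first.
rows = conn.execute(
    "SELECT region, SUM(amount) AS total FROM sales "
    "GROUP BY region ORDER BY total DESC"
).fetchall()
print(rows)  # → [('south', 170.0), ('north', 80.0)]
```

The SQL itself is portable: the identical GROUP BY/ORDER BY statement would run unchanged in a Databricks notebook via `spark.sql(...)`.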

Interview Questions

  • What is Databricks and how is it different from Apache Spark? (basic)
  • Explain the concept of lazy evaluation in Databricks. (medium)
  • How do you optimize performance in Databricks? (advanced)
  • What are the different cluster modes in Databricks? (basic)
  • How do you handle data skewness in Databricks? (medium)
  • Explain how you can schedule jobs in Databricks. (medium)
  • What is the significance of Delta Lake in Databricks? (advanced)
  • How do you handle schema evolution in Databricks? (medium)
  • What are the different file formats supported by Databricks for reading and writing data? (basic)
  • Explain the concept of checkpointing in Databricks. (medium)
  • How do you troubleshoot performance issues in Databricks? (advanced)
  • What are the key components of Databricks Runtime? (basic)
  • How can you secure your data in Databricks? (medium)
  • Explain the role of MLflow in Databricks. (advanced)
  • How do you handle streaming data in Databricks? (medium)
  • What is the difference between Databricks Community Edition and Databricks Workspace? (basic)
  • How do you set up monitoring and alerting in Databricks? (medium)
  • Explain the concept of Delta caching in Databricks. (advanced)
  • How do you handle schema enforcement in Databricks? (medium)
  • What are the common challenges faced in Databricks projects and how do you overcome them? (advanced)
  • How do you perform ETL operations in Databricks? (medium)
  • Explain the concept of MLflow Tracking in Databricks. (advanced)
  • How do you handle data lineage in Databricks? (medium)
  • What are the best practices for data governance in Databricks? (advanced)
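Several of the questions above turn on Spark's lazy evaluation: transformations only record an execution plan, and no work happens until an action forces it. Python generators behave analogously, so — as a hedged stdlib sketch rather than actual Databricks code — the idea can be demonstrated without a Spark cluster:

```python
def transform(nums):
    """Analogous to a Spark transformation chain (filter + map):
    this builds a lazy pipeline and performs no computation yet."""
    return (n * n for n in nums if n % 2 == 0)

pipeline = transform(range(10))  # nothing has been computed at this point
result = list(pipeline)          # the "action": forces the whole pipeline to run
print(result)  # → [0, 4, 16, 36, 64]
```

In PySpark the same shape would be `df.filter(...).select(...)` (lazy) followed by `.collect()` or a write (the action); deferring execution this way lets the engine optimize the whole plan before touching any data.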

Closing Remark

As you prepare for Databricks job interviews, make sure to brush up on your technical skills, stay updated with the latest trends in the field, and showcase your problem-solving abilities. With the right preparation and confidence, you can land your dream job in the exciting world of Databricks in India. Good luck!
