0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
When you join Verizon

You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere and always. Want in? Join the #VTeamLife.

What You’ll Be Doing...

We are seeking a visionary and technically strong Senior AI Architect to join our Billing IT organization and drive innovation at the intersection of telecom billing, customer experience, and artificial intelligence. This leadership role will be pivotal in designing, developing, and scaling AI-led solutions that redefine how we bill our customers, improve their billing experience, and derive actionable insights from billing data. You will work closely with cross-functional teams to lead initiatives that transform customer-facing systems, backend data platforms, and software development practices through modern AI technologies.

Key Responsibilities

Customer Experience Innovation: Designing and implementing AI-driven enhancements to improve the telecom customer experience, particularly in the billing domain. Leading end-to-end initiatives that personalize, simplify, and demystify billing interactions for customers.

AI Tools and Platforms: Evaluating and implementing cutting-edge AI/ML models, LLMs, SLMs, and AI-powered solutions for use across the billing ecosystem. Developing prototypes and production-grade AI tools to solve real-world customer pain points.

Prompt Engineering & Applied AI: Applying deep expertise in prompt engineering and advanced LLM usage to build conversational tools, intelligent agents, and self-service experiences for customers and support teams. Partnering with design and development teams to build intuitive AI interfaces and utilities.

AI Pair Programming Leadership: Demonstrating hands-on experience with AI-assisted development tools (e.g., GitHub Copilot, Codeium). Driving adoption of such tools across development teams, tracking measurable productivity improvements, and integrating them into SDLC pipelines.

Data-Driven Insight Generation: Leading large-scale data analysis initiatives using AI/ML methods to generate meaningful business insights, predict customer behavior, and prevent billing-related issues. Establishing feedback loops between customer behavior and billing system design.

Thought Leadership & Strategy: Acting as a thought leader in AI and customer experience within the organization. Staying abreast of trends in AI and telecom customer experience, and regularly benchmarking internal initiatives against industry best practices.

Architectural Excellence: Owning and evolving the technical architecture of AI-driven billing capabilities, ensuring scalability, performance, security, and maintainability. Collaborating with enterprise architects and domain leads to align with broader IT and digital transformation goals.

Telecom Billing Domain Expertise: Bringing a deep understanding of telecom billing functions, processes, and IT architectures, including usage processing, rating, billing cycles, invoice generation, adjustments, and revenue assurance.
Providing architectural guidance to ensure AI and analytics solutions are well integrated into core billing platforms with minimal operational risk.

Where you'll be working...

In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

What We’re Looking For...

You’re energized by the prospect of putting your advanced expertise to work as one of the most senior members of the team. You’re motivated by working on groundbreaking technologies to have an impact on people’s lives.

You’ll Need To Have

Bachelor’s degree or four or more years of work experience.
Six or more years of relevant experience, demonstrated through one or a combination of work experiences.
Strong understanding of AI/ML concepts, including generative AI and LLMs (Large Language Models), with the ability to evaluate and apply them to solve real-world problems in telecom and billing.
Familiarity with industry-leading AI models and platforms (e.g., OpenAI GPT, Google Gemini, Microsoft Phi, Meta LLaMA, AWS Bedrock), and understanding of their comparative strengths, pricing models, and applicability.
Ability to scan and interpret AI industry trends, identify emerging tools, and match them to business use cases (e.g., bill explainability, predictive analytics, anomaly detection, agent assist).
Skilled in adopting and integrating third-party AI tools, rather than building from scratch, into existing IT systems, ensuring fit-for-purpose usage with strong ROI.
Experience working with AI product vendors, evaluating PoCs, and influencing make-buy decisions for AI capabilities.
Comfortable guiding cross-functional teams (tech, product, operations) on where and how to apply AI tools, including identifying appropriate use cases and measuring impact.
Deep expertise in writing effective and optimized prompts across various LLMs.
Knowledge of prompt chaining, tool-use prompting, function calling, embedding techniques, and vector search optimization.
Ability to mentor others on best practices for LLM prompt engineering and prompt tuning.
In-depth understanding of telecom billing functions: mediation, rating, charging, invoicing, adjustments, discounts, taxes, collections, and dispute management.
Strong grasp of billing SLAs, accuracy metrics, and compliance requirements in a telecom environment.
Proven ability to define and evolve cloud-native, microservices-based architectures with AI components.
Deep understanding of software engineering practices including modular design, API-first development, testing automation, and observability.
Experience in designing scalable, resilient systems for high-volume data pipelines and customer interactions.
Demonstrated hands-on use of tools like GitHub Copilot, Codeium, AWS CodeWhisperer, etc.
Strong track record in scaling adoption of AI pair programming tools across engineering teams.
Ability to quantify productivity improvements and integrate tooling into CI/CD pipelines.
Skilled in working with large-scale structured and unstructured billing and customer data.
Proficiency in tools like SQL, Python (Pandas, NumPy), Spark, and data visualization platforms (e.g., Power BI, Tableau).
Experience designing and operationalizing AI/ML models to derive billing insights, detect anomalies, or improve revenue assurance.
Excellent ability to translate complex technical concepts to business stakeholders.
Influential leadership with a track record of driving innovation, change management, and cross-functional collaboration.
Ability to coach and mentor engineers, analysts, and product owners on AI technologies and best practices.
Keen awareness of emerging AI trends, vendor platforms, open-source initiatives, and market best practices.
Active engagement in AI communities, publications, or proof-of-concept experimentation.

Even better if you have one or more of the following:

A master’s degree.

If Verizon and this role sound like a fit for you, we encourage you to apply even if you don’t meet every “even better” qualification listed above.

Where you’ll be working

In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

Scheduled Weekly Hours: 40

Equal Employment Opportunity

Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
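To give a flavor of the prompt engineering and LLM-usage work this role describes, here is a minimal, hypothetical sketch of a bill-explainability assistant. The model name, prompt wording, and helper function are illustrative assumptions, not Verizon's actual implementation.

```python
# Hypothetical sketch: an LLM-backed "explain my bill" helper.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in
# OPENAI_API_KEY; the model name and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a telecom billing assistant. Explain charges in plain language, "
    "cite the line items you reference, and never invent amounts."
)

def explain_bill(bill_summary: str, question: str) -> str:
    """Ask the model to explain a customer's bill from a structured summary."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; swap in whatever your org licenses
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user",
             "content": f"Bill summary:\n{bill_summary}\n\nQuestion: {question}"},
        ],
        temperature=0.2,  # keep answers grounded and repeatable
    )
    return response.choices[0].message.content

print(explain_bill("Plan: 599 INR; Data add-on: 120 INR; Late fee: 50 INR",
                   "Why is my bill higher this month?"))
```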
Posted 2 days ago
13.0 - 18.0 years
45 - 50 Lacs
Bengaluru
Work from Office
Job Title: Name List Screening and Transaction Screening Model Strats, AS

Role Description

Group Strategic Analytics (GSA) is part of the Group Chief Operating Office (COO), which acts as the bridge between the Bank's business and infrastructure functions to help deliver the efficiency, control, and transformation goals of the Bank. You will work within the Global Strategic Analytics Team as part of a global model strategy and deployment of Name List Screening and Transaction Screening. To be successful in this role, you will be familiar with the most recent data science methodologies and have a delivery-centric attitude, strong analytical skills, and a detail-oriented approach to breaking down complex matters into more understandable details.

The purpose of Name List Screening and Transaction Screening is to identify and investigate unusual customer names, transactions, and behavior, to understand whether that activity is considered suspicious from a financial crime perspective, and to report that activity to the government. You will be responsible for helping to implement and maintain the models for Name List Screening and Transaction Screening to ensure that all relevant criminal risks, typologies, products, and services are properly monitored.

We are looking for a high-performing Associate in financial crime model development, tuning, and analytics to support the global strategy for screening systems across Name List Screening (NLS) and Transaction Screening (TS). This role offers the opportunity to work on key model initiatives within a cross-regional team and contribute directly to the bank's risk mitigation efforts against financial crime. You will support model tuning and development efforts, support regulatory deliverables, and collaborate with cross-functional teams including Compliance, Data Engineering, and Technology.

Your key responsibilities

Support the design and implementation of the model framework for name and transaction screening, including coverage, data, model development and optimisation.
Support key data initiatives, including but not limited to data lineage, data quality controls, and data quality issues management.
Document model logic and liaise with Compliance and Model Risk Management teams to ensure screening systems and scenarios adhere to all model governance standards.
Participate in research projects on innovative solutions to make detection models more proactive.
Assist in model testing, calibration and performance monitoring.
Ensure detailed metrics and reporting are developed to provide transparency and maintain effectiveness of name and transaction screening models.
Support all examinations and reviews performed by regulators, monitors, and internal audit.

Your skills and experience

Advanced degree (Master's or PhD) in a quantitative discipline (Mathematics, Computer Science, Data Science, Physics or Statistics).
1-3 years' experience in data analytics or model development (internships included).
Proficiency in designing, implementing (Python, Spark, cloud environments) and deploying quantitative models in a large financial institution, preferably in Front Office. Hands-on approach needed.
Experience utilizing Machine Learning and Artificial Intelligence.
Experience with data and the ability to clearly articulate data requirements as they relate to NLS and TS, including comprehensiveness, quality, accuracy and integrity.
Knowledge of the bank's products and services, including those related to corporate banking, investment banking, private banking, and asset management.
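As a rough illustration of the matching logic that name-screening models build on, here is a minimal sketch using only the Python standard library. The watchlist entries and threshold are invented, and production screening systems use far richer techniques (phonetic encoding, transliteration, ML scoring) than simple string similarity.

```python
# Toy fuzzy name matching against a watchlist; names and threshold are
# illustrative. Real NLS systems combine many matching signals.
from difflib import SequenceMatcher

WATCHLIST = ["Ivan Petrov", "Acme Trading LLC", "John A. Doe"]

def similarity(a: str, b: str) -> float:
    """Case-insensitive similarity ratio between two names (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def screen_name(candidate: str, threshold: float = 0.85) -> list[tuple[str, float]]:
    """Return watchlist entries whose similarity meets the tuning threshold."""
    scored = [(entry, similarity(candidate, entry)) for entry in WATCHLIST]
    return [(entry, round(score, 3)) for entry, score in scored if score >= threshold]

print(screen_name("Iwan Petrov"))   # likely hit despite the spelling variant
print(screen_name("Jane Smith"))    # no hit expected
```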
Posted 2 days ago
0 years
0 Lacs
Navi Mumbai, Maharashtra, India
On-site
Introduction

In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role And Responsibilities

As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, and provide regular support and guidance to project teams on complex coding, issue resolution and execution. Your primary responsibilities include:

Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
Strive for continuous improvement by testing the built solution and working under an agile framework.
Discover and implement the latest technology trends to maximize value and build creative solutions.

Preferred Education: Master's Degree

Required Technical And Professional Expertise

Experience with Apache Spark (PySpark): in-depth knowledge of Spark’s architecture, core APIs, and PySpark for distributed data processing.
Big Data Technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools.
Data Engineering Skills: strong understanding of ETL pipelines, data modeling, and data warehousing concepts.
Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation.
Data Processing Frameworks: knowledge of data processing libraries such as Pandas and NumPy.
SQL Proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation.
Cloud Platforms: experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems.

Preferred Technical And Professional Experience

Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
Good to have: experience with detection and prevention tools for company products, platforms, and customer-facing systems.
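A minimal sketch of the kind of PySpark ETL described above: read raw files, clean them, and write partitioned Parquet. The paths, columns, and dedupe rule are hypothetical.

```python
# Minimal PySpark ETL sketch: ingest raw CSV, clean, write partitioned Parquet.
# File paths, columns, and filter rules are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw = (spark.read
       .option("header", True)
       .option("inferSchema", True)
       .csv("s3a://example-bucket/raw/orders/"))

clean = (raw
         .dropDuplicates(["order_id"])                       # drop exact re-sends
         .filter(F.col("amount") > 0)                        # discard bad rows
         .withColumn("order_date", F.to_date("created_at"))) # derive partition key

(clean.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3a://example-bucket/curated/orders/"))

spark.stop()
```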
Posted 2 days ago
5.0 - 8.0 years
4 - 8 Lacs
Chennai
Work from Office
Job Title: Azure Data Engineer (Event Hub)
Location: Chennai, India
Type: Hybrid
Experience: 5-8 Years

Job Description: We are seeking a skilled Azure Data Engineer with hands-on experience in Microsoft Fabric and Azure Event Hub to design and implement scalable data solutions for modern data platforms.

Required Skills:

Mandatory:
Microsoft Fabric: hands-on experience with Data Warehousing, Lakehouse, and Real-Time Analytics within Fabric
Azure Event Hub: expertise in configuring and managing event streaming

Additional Azure Tools:
Azure Data Factory
Azure Synapse Analytics
Azure Data Lake Storage Gen2
Azure Databricks or Spark-based frameworks
Azure Stream Analytics or equivalent stream processing tools

Proficiency in SQL and Python or PySpark for data transformation and scripting
Strong understanding of data modeling, data warehousing, and distributed data systems
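For flavor, here is a minimal sketch of consuming an Azure Event Hub from Spark Structured Streaming through the hub's Kafka-compatible endpoint. The namespace, hub name, and connection string are placeholders; the options follow Spark's standard Kafka source as documented for Event Hubs.

```python
# Sketch: read an Azure Event Hub via its Kafka-compatible endpoint (port 9093).
# Namespace, hub name, and connection string below are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("eventhub-stream").getOrCreate()

EH_NAMESPACE = "my-namespace"   # assumed Event Hubs namespace
EH_NAME = "telemetry"           # assumed event hub (maps to a Kafka topic)
EH_CONN = "Endpoint=sb://my-namespace.servicebus.windows.net/;..."  # elided secret

jaas = (
    'org.apache.kafka.common.security.plain.PlainLoginModule required '
    f'username="$ConnectionString" password="{EH_CONN}";'
)

stream = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers",
                  f"{EH_NAMESPACE}.servicebus.windows.net:9093")
          .option("subscribe", EH_NAME)
          .option("kafka.security.protocol", "SASL_SSL")
          .option("kafka.sasl.mechanism", "PLAIN")
          .option("kafka.sasl.jaas.config", jaas)
          .option("startingOffsets", "latest")
          .load())

# Payloads arrive as bytes in the `value` column; decode before parsing.
events = stream.selectExpr("CAST(value AS STRING) AS body", "timestamp")

query = (events.writeStream
         .format("console")   # console sink for a quick smoke test
         .outputMode("append")
         .start())
query.awaitTermination()
```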
Posted 2 days ago
2.0 - 5.0 years
2 - 5 Lacs
Bengaluru
Work from Office
Job Description

Job Title: Senior Sales Development Representative
Location: Bangalore
Experience Range: 3-5 Years

As a Sales Development Representative, you are the first link in our sales process and the driving force behind our growth. You will directly reach out to potential clients, convince them of the value of our services, and ensure we can move forward with the right leads. You lay the foundation for long-term client relationships and play a crucial role in the success of our sales team.

What You'll Do:

Cold calling - Contact prospects and spark their interest in our IT services.
Qualifying leads - Identify customers' needs and determine if they are a good fit for our services.
Building relationships - You are the first impression of Resillion and set the stage for successful collaboration.
Working with sales - Pass warm leads to account managers for further follow-up.
Updating the CRM - Keep conversations and results well-documented in our CRM system.
Using tools - Utilize Sales Engagement Software, email outreach tools, lead generation tools, and call automation and dialers to work more efficiently and reach the right leads.
Working independently - Proactively think of ways to generate leads and apply innovative techniques for success.

Why This Role Is Right for You:

You love sales! Conversations energize you, and you find satisfaction in achieving goals.
No scripts, just freedom - You have the freedom to develop your own approach to convincing customers.
Bonuses and growth - Your performance is rewarded with attractive bonuses, and you have opportunities for career growth.
Impact - Your work directly contributes to the success of our sales team and the company as a whole.

Qualifications:

Minimum of 3 years of experience as a Sales Development Representative (SDR) or in a similar cold calling/lead generation role.
Independent thinker - You work autonomously and take a creative approach to problem-solving.
Resilient to rejection - You stay motivated and focused, even when calls don't immediately result in success.
Entrepreneurial mindset - You proactively seek new opportunities and ways to grow.
Self-starter - You take responsibility for your own success and work independently toward achieving your goals.
Industry experience is a plus - Experience in the IT or tech sector is a big advantage.
Driven to earn - You are ambitious and motivated by strong earning potential.
Tech-savvy - You have experience with CRM software, Sales Engagement Software, email outreach tools, lead generation tools, and call automation and dialers.
Excellent communication skills - You know how to generate interest and steer conversations to persuade clients.
Goal-oriented - You thrive on targets and work hard to meet them.
Team player - You collaborate well with colleagues to achieve shared success.
Posted 2 days ago
7.0 - 12.0 years
18 - 22 Lacs
Noida, Gurugram, Delhi / NCR
Work from Office
Data Scientist / Engineer: AI/ML, data collection, architecture creation, Python, R, data analysis, Pandas, NumPy, Matplotlib, Git, TensorFlow, PyTorch, scikit-learn, Keras, cloud platforms (AWS/Azure/GCP), Docker, Kubernetes, Big Data, Hadoop, Spark
Posted 2 days ago
15.0 - 20.0 years
100 - 200 Lacs
Bengaluru
Hybrid
What You'll Do:

Play a key role in developing and driving a multi-year technology strategy for a complex platform.
Directly and indirectly manage several senior software engineers (architects) and managers by providing coaching, guidance, and mentorship to grow the team as well as individuals.
Lead multiple software development teams - architecting solutions at scale to empower the business, and owning all aspects of the SDLC: design, build, deliver, and maintain.
Inspire, coach, mentor, and support your team members in their day-to-day work and their long-term professional growth.
Attract, onboard, develop and retain diverse top talent, while fostering an inclusive and collaborative team and culture (our latest DEI Report).
Lead your team and peers by example. As a senior member of the team, your methodologies, technical and operational excellence practices, and system designs will help to continuously improve our domain.
Identify, propose, and drive initiatives to advance the technical skills, standards, practices, architecture, and documentation of our engineering teams.
Facilitate technical debate and decision making with an appreciation for trade-offs.
Continuously rethink and push the status quo, even when it challenges your/our established ideas.

Preferred candidate profile

Results-oriented, collaborative, pragmatic, and continuous-improvement mindset.
Hands-on experience driving software transformations within high-growth environments (think complex, cross-continentally owned products).
15+ years of experience in engineering, of which at least 10 years spent leading highly performant teams and their managers (please note that a minimum of 5 years in leading fully fledged managers is required).
Experience making architectural and design-related decisions for large-scale platforms, understanding the trade-offs between time-to-market and flexibility.
Significant experience and vocation in managing and enabling people's growth and performance.
Experience designing and building high-scale generalizable products with outstanding user experience.
Practical experience in hiring and developing engineering teams and culture, and leading interdisciplinary teams in a fast-paced agile environment.
Capability to communicate and collaborate across the wider organization, influencing decisions with and without direct authority, and always with inclusive, adaptable, and persuasive communication.
Analytical and decision-making skills that integrate technical and business requirements.
Posted 2 days ago
10.0 years
0 Lacs
Pune, Maharashtra, India
Remote
About Fusemachines

Fusemachines is a 10+ year old AI company, dedicated to delivering state-of-the-art AI products and solutions to a diverse range of industries. Founded by Sameer Maskey, Ph.D., an Adjunct Associate Professor at Columbia University, our company is on a steadfast mission to democratize AI and harness the power of global AI talent from underserved communities. With a robust presence in four countries and a dedicated team of over 400 full-time employees, we are committed to fostering AI transformation journeys for businesses worldwide. At Fusemachines, we not only bridge the gap between AI advancement and its global impact but also strive to deliver the most advanced technology solutions to the world.

About The Role

This is a remote, full-time contractual position in the Travel & Hospitality industry, responsible for designing, building, testing, optimizing and maintaining the infrastructure and code required for data integration, storage, processing, pipelines and analytics (BI, visualization and Advanced Analytics) from ingestion to consumption, implementing data flow controls, and ensuring high data quality and accessibility for analytics and business intelligence purposes. This role requires a strong foundation in programming and a keen understanding of how to integrate and manage data effectively across various storage systems and technologies. We're looking for someone who can quickly ramp up, contribute right away and work independently, as well as with junior team members, with minimal oversight.

We are looking for a skilled Sr. Data Engineer with a strong background in Python, SQL, PySpark, Redshift, and AWS cloud-based large-scale data solutions, with a passion for data quality, performance and cost optimization. The ideal candidate will develop in an Agile environment. This role is perfect for an individual passionate about leveraging data to drive insights, improve decision-making, and support the strategic goals of the organization through innovative data engineering solutions.

Qualification / Skill Set Requirement:

Must have a full-time Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field.
5+ years of real-world data engineering development experience in AWS (certifications preferred).
Strong expertise in Python, SQL, PySpark and AWS in an Agile environment, with a proven track record of building and optimizing data pipelines, architectures, and datasets, and proven experience in data storage, modelling, management, lake, warehousing, processing/transformation, integration, cleansing, validation and analytics.
A senior person who can understand requirements and design end-to-end solutions with minimal oversight.
Strong programming skills in one or more languages such as Python or Scala, and proficiency in writing efficient and optimized code for data integration, storage, processing and manipulation.
Strong knowledge of SDLC tools and technologies, including project management software (Jira or similar), source code management (GitHub or similar), CI/CD systems (GitHub Actions, AWS CodeBuild or similar) and binary repository managers (AWS CodeArtifact or similar).
Good understanding of data modelling and database design principles; able to design and implement efficient database schemas that meet the requirements of the data architecture to support data solutions.
Strong SQL skills and experience working with complex data sets, Enterprise Data Warehouses and writing advanced SQL queries.
Proficiency with relational databases (RDS, MySQL, Postgres, or similar) and NoSQL databases (Cassandra, MongoDB, Neo4j, etc.).
Skilled in data integration from different sources such as APIs, databases, flat files, and event streaming.
Strong experience in implementing data pipelines and efficient ELT/ETL processes, batch and real-time, in AWS and using open source solutions, with the ability to develop custom integration solutions as needed, covering sources such as APIs (PoS integrations a plus), ERPs (Oracle and Allegra a plus), databases, flat files, Apache Parquet, and event streams, including cleansing, transformation and validation of the data.
Strong experience with scalable and distributed data technologies such as Spark/PySpark, DBT and Kafka, to handle large volumes of data.
Experience with stream-processing systems (Storm, Spark Streaming, etc.) is a plus.
Strong experience in designing and implementing Data Warehousing solutions in AWS with Redshift. Demonstrated experience in designing and implementing efficient ELT/ETL processes that extract data from source systems, transform it (DBT), and load it into the data warehouse.
Strong experience in orchestration using Apache Airflow.
Expert in cloud computing on AWS, including deep knowledge of a variety of AWS services like Lambda, Kinesis, S3, Lake Formation, EC2, EMR, ECS/ECR, IAM, CloudWatch, etc.
Good understanding of Data Quality and Governance, including implementation of data quality checks and monitoring processes to ensure that data is accurate, complete, and consistent.
Good understanding of BI solutions, including Looker and LookML (Looker Modelling Language).
Strong knowledge and hands-on experience of DevOps principles, tools and technologies (GitHub and AWS DevOps), including continuous integration, continuous delivery (CI/CD), infrastructure as code (IaC, e.g. Terraform), configuration management, automated testing, performance tuning, and cost management and optimization.
Good problem-solving skills: able to troubleshoot data processing pipelines and identify performance bottlenecks and other issues.
Strong leadership skills with a willingness to lead, create ideas, and be assertive.
Strong project management and organizational skills.
Excellent communication skills to collaborate with cross-functional teams, including business users, data architects, DevOps/DataOps/MLOps engineers, data analysts, data scientists, developers, and operations teams.
Ability to convey complex technical concepts and insights to non-technical stakeholders effectively.
Ability to document processes, procedures, and deployment configurations.

Responsibilities:

Design, implement, deploy, test and maintain highly scalable and efficient data architectures, defining and maintaining standards and best practices for data management independently, with minimal guidance.
Ensure the scalability, reliability, quality and performance of data systems.
Mentor and guide junior/mid-level data engineers.
Collaborate with Product, Engineering, Data Scientists and Analysts to understand data requirements and develop data solutions, including reusable components.
Evaluate and implement new technologies and tools to improve data integration, data processing and analysis.
Design architecture, observability and testing strategies, and build reliable infrastructure and data pipelines.
Take ownership of the storage layer and data management tasks, including schema design, indexing, and performance tuning.
Swiftly address and resolve complex data engineering issues and incidents, and resolve bottlenecks in SQL queries and database operations.
Conduct a discovery on the existing data infrastructure and proposed architecture.
Evaluate and implement cutting-edge technologies and methodologies, and continue learning and expanding skills in data engineering and cloud platforms, to improve and modernize existing data systems.
Evaluate, design, and implement data governance solutions: cataloguing, lineage, quality and data governance frameworks that are suitable for a modern analytics solution, considering industry-standard best practices and patterns.
Define and document data engineering architectures, processes and data flows.
Assess best practices and design schemas that match business needs for delivering a modern analytics solution (descriptive, diagnostic, predictive, prescriptive).
Be an active member of our Agile team, participating in all ceremonies and continuous improvement activities.

Fusemachines is an equal opportunity employer, committed to diversity and inclusion. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or any other characteristic protected by applicable federal, state, or local laws.
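As a flavor of the Airflow orchestration work this role mentions, here is a minimal, hypothetical DAG wiring an extract-transform-load sequence. The DAG id, schedule, and task bodies are placeholders, not Fusemachines' actual pipeline.

```python
# Minimal Airflow DAG sketch: a daily extract -> transform -> load chain.
# DAG id, schedule, and task bodies are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw files from the source API/S3 (placeholder)")

def transform():
    print("clean and reshape with pandas/PySpark (placeholder)")

def load():
    print("COPY curated data into the warehouse, e.g. Redshift (placeholder)")

with DAG(
    dag_id="daily_sales_elt",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",   # newer Airflow versions also accept `schedule=`
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```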
Posted 2 days ago
1.0 - 5.0 years
7 - 8 Lacs
Bengaluru
Work from Office
Diverse Lynx is looking for a Snowflake Developer to join our dynamic team and embark on a rewarding career journey. A Developer is responsible for designing, developing, and maintaining software applications and systems. They collaborate with a team of software developers, designers, and stakeholders to create software solutions that meet the needs of the business.

Key responsibilities:
Design, code, test, and debug software applications and systems
Collaborate with cross-functional teams to identify and resolve software issues
Write clean, efficient, and well-documented code
Stay current with emerging technologies and industry trends
Participate in code reviews to ensure code quality and adherence to coding standards
Participate in the full software development life cycle, from requirement gathering to deployment
Provide technical support and troubleshooting for production issues

Requirements:
Strong programming skills in one or more programming languages, such as Python, Java, C++, or JavaScript
Experience with software development tools, such as version control systems (e.g., Git), integrated development environments (IDEs), and debugging tools
Familiarity with software design patterns and best practices
Good communication and collaboration skills
Posted 2 days ago
6.0 - 11.0 years
10 - 14 Lacs
Kolkata, Mumbai, New Delhi
Work from Office
Perficient India is looking for a Lead Technical Consultant - Java + Kafka to join our dynamic team and embark on a rewarding career journey.

Undertake short-term or long-term projects to address a variety of issues and needs
Meet with management or appropriate staff to understand their requirements
Use interviews, surveys, etc. to collect necessary data
Conduct situational and data analysis to identify and understand a problem or issue
Present and explain findings to appropriate executives
Provide advice or suggestions for improvement according to objectives
Formulate plans to implement recommendations and overcome objections
Arrange for or provide training to people affected by change
Evaluate the situation periodically and make adjustments when needed
Replenish knowledge of industry, products and field
Posted 2 days ago
5.0 - 6.0 years
6 - 7 Lacs
Bengaluru
Work from Office
Detailed JD (Roles and Responsibilities)

The candidate will be responsible for Kafka support and testing. Should be familiar with Kafka commands, with strong experience in troubleshooting. Exposure to KSQL and REST Proxy is preferred.
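For a taste of the troubleshooting described, here is a minimal, assumption-heavy sketch that checks whether messages are flowing on a topic using the kafka-python client. The broker address, topic, and group id are placeholders.

```python
# Quick Kafka smoke test: can we read recent messages from a topic?
# Assumes the kafka-python package (pip install kafka-python); broker,
# topic, and group id below are placeholders.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders",                              # topic under investigation
    bootstrap_servers=["localhost:9092"],  # replace with your broker(s)
    group_id="debug-probe",
    auto_offset_reset="earliest",          # start from the oldest retained offset
    consumer_timeout_ms=10_000,            # stop iterating after 10s of silence
)

count = 0
for message in consumer:
    count += 1
    if count <= 5:  # print a small sample for inspection
        print(message.partition, message.offset, message.value[:120])

print(f"consumed {count} messages before timing out")
consumer.close()
```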
Posted 2 days ago
5.0 years
0 Lacs
India
Remote
Job Title: Data Analytics and Business Intelligence Engineer (Databricks Specialist)
Location: Remote
Timings: 6:30 PM IST - 3:30 AM IST
Job Type: Full-Time

Job Summary:

We are seeking a highly skilled and analytical Data Analytics and Business Intelligence (BI) Engineer with strong experience in Databricks to join our data team. The ideal candidate will be responsible for designing, developing, and maintaining scalable data pipelines, dashboards, and analytics solutions that drive business insights and decision-making.

Key Responsibilities:

Design and implement robust data pipelines using Databricks, Apache Spark, and Delta Lake.
Develop and maintain ETL/ELT workflows to ingest, transform, and store large volumes of structured and unstructured data.
Build and optimize data models and data marts to support self-service BI and advanced analytics.
Create interactive dashboards and reports using tools like Power BI, Tableau, or Looker.
Collaborate with data scientists, analysts, and business stakeholders to understand data needs and deliver actionable insights.
Ensure data quality, integrity, and governance across all analytics solutions.
Monitor and improve the performance of data pipelines and BI tools.
Stay current with emerging technologies and best practices in data engineering and analytics.

Required Qualifications:

Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, or a related field.
5+ years of experience in data engineering, analytics, or BI development.
Strong proficiency in Databricks, Apache Spark, and SQL.
Experience with cloud platforms such as Azure, AWS, or GCP.
Proficiency in Python or Scala for data processing.
Hands-on experience with data visualization tools (Power BI, Tableau, etc.).
Solid understanding of data warehousing concepts, dimensional modeling, and data lakes.
Familiarity with CI/CD pipelines, version control (Git), and Agile methodologies.

Preferred Qualifications:

Databricks certification (e.g., Databricks Certified Data Engineer Associate/Professional).
Experience with MLflow, Delta Live Tables, or Unity Catalog.
Knowledge of data governance, security, and compliance standards.
Strong communication and stakeholder management skills.
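A minimal sketch of a Delta Lake write-and-read cycle of the sort this role involves, intended for a Databricks notebook where `spark` is predefined and Delta is the default table format. The schema, table, and column names are illustrative.

```python
# Sketch: land events into a Delta table and read them back for BI.
# Assumes a Databricks notebook (spark predefined, Delta available);
# the table and columns are illustrative.
from pyspark.sql import functions as F

events = spark.createDataFrame(
    [("2024-06-01", "checkout", 120.0), ("2024-06-01", "refund", -30.0)],
    ["event_date", "event_type", "amount"],
)

(events.write
       .format("delta")
       .mode("append")
       .saveAsTable("analytics.daily_events"))   # assumed schema/table

# Downstream BI query: daily net revenue per event type.
summary = (spark.table("analytics.daily_events")
                .groupBy("event_date", "event_type")
                .agg(F.sum("amount").alias("net_amount")))
summary.show()
```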
Posted 2 days ago
4.0 - 9.0 years
7 - 11 Lacs
Hyderabad
Work from Office
This is what you'll do:

The position grows our analytics capabilities with faster, more reliable tools, handling petabytes of data every day.
Brainstorm and create new platforms that can help in our quest to make data available to cluster users in all shapes and forms, with low latency and horizontal scalability.
Diagnose and fix problems across the entire technical stack.
Design and develop a real-time events pipeline for data ingestion for real-time dashboarding.
Develop complex and efficient functions to transform raw data sources into powerful, reliable components of our data lake.
Design and implement new components and various emerging technologies in the Hadoop ecosystem, and successfully execute various projects.

Skills that will help you succeed in this role:

Strong hands-on experience of 4+ years with Spark, preferably PySpark.
Excellent programming/debugging skills in Python.
Experience with a scripting language such as Python or Bash.
Good experience with databases such as SQL and MongoDB.
Good to have experience with AWS and cloud technologies such as S3.
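As an illustration of the real-time dashboarding pipeline described above, here is a minimal PySpark Structured Streaming sketch that aggregates events into one-minute windows. The socket source, JSON schema, and console sink are stand-ins for a real Kafka source and serving store.

```python
# Sketch: windowed event counts for a real-time dashboard.
# Source, schema, and sink are placeholders for a production pipeline.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StringType, TimestampType

spark = SparkSession.builder.appName("events-dashboard").getOrCreate()

schema = (StructType()
          .add("event_type", StringType())
          .add("ts", TimestampType()))

raw = (spark.readStream
       .format("socket").option("host", "localhost").option("port", 9999)
       .load())

events = raw.select(F.from_json(F.col("value"), schema).alias("e")).select("e.*")

counts = (events
          .withWatermark("ts", "2 minutes")              # tolerate late events
          .groupBy(F.window("ts", "1 minute"), "event_type")
          .count())

query = (counts.writeStream
         .outputMode("update")
         .format("console")
         .start())
query.awaitTermination()
```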
Posted 2 days ago
6.0 - 11.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Job Title: Concentrix Software Solutions is hiring a MERN Software Engineer

Job Description

About the Company: At Concentrix, we create technology that powers the greatest customer experiences globally. We build software products and platforms that enhance Customer Experience (CX) for the world's best brands. Our transformative solutions are powered by automation, analytics and artificial intelligence to solve exciting CX challenges. Our product R&D teams use design thinking, secure engineering practices and lean agile methods to create unique intellectual property, and produce high-quality, secure and intuitive solutions. Working at the Global Product Organization (GPO) is rewarding and fulfilling for our engineers, designers and product strategists. Our work environment encourages diversity, teamwork, continuous learning and individual development that fuels creativity and challenges convention. We are truly fanatical about our clients and staff and invest in their long-term future. We take great pride in being recognized as the trusted go-to partner of choice for our customers. The Global Product Organization (GPO) is a remote-first organization and has flexible working models for staff, enabling them to contribute efficiently. We are backed by Concentrix Corporation (NASDAQ: CNXC), a leading global provider of Customer Experience technology and services.

Must have 6+ years of experience working as a MERN full-stack developer, with front-end coding experience in Node/Express, React, JavaScript, jQuery and its plugins.
Must have experience with related libraries and frameworks like React Native, Bootstrap, Redux, RxJS, and build tools like Webpack, Browserify, NPM, etc.
Expertise in various DBMS technologies like MySQL, MongoDB, and Oracle. Knowledge of caching mechanisms like Varnish, Memcached or Redis is a plus.
Must have 3+ years of experience in at least one server-side programming language like Java or Go.
We will be using Cassandra, Spark, Rapids, Elasticsearch, SQL, Kafka, scalability/throughput, distributed systems, concurrency, Swagger/API development, WebSockets/gRPC, ChromeOS extensions, and Git.
Knowledge of responsive design and building applications that work across all pixel sizes.
Strong OOPS knowledge, including experience with design patterns.
Knowledge of creation and consumption of REST and SOAP web services is desirable.
Exposure to Agile/Scrum, TDD, and continuous integration tools like Jenkins, Bitbucket, etc.

Location: India Bangalore - Divyashree
Time Type: Full time
Posted 2 days ago
2.0 - 6.0 years
10 - 15 Lacs
Hyderabad
Work from Office
JD: Senior Data Engineer

Job Location: Hyderabad

We are looking for an experienced Data Engineer with 3+ years of expertise in Databricks, AWS, Scala, and Apache Spark to work with one of our leading Japanese automotive clients in North America, where cutting-edge technology meets innovation. The ideal candidate will have a strong foundation in big data processing, cloud technologies, and performance tuning to enable efficient data management. You'll also collaborate with global teams, work with advanced cloud technologies, and contribute to a forward-thinking data ecosystem that powers the future of automotive engineering.

Who Can Apply:

Only candidates who can join immediately or within 1 week may apply.
Ideal for those seeking technical growth and work on a global project with cutting-edge technologies.
Best suited for professionals passionate about innovation and problem-solving.

Key Responsibilities:

Architect, design, and implement scalable ETL pipelines for large data processing.
Develop and optimize data solutions using Databricks, AWS, Scala, and Spark.
Ensure high-performance data processing with distributed computing techniques.
Implement best practices for data modeling, transformation, and governance.
Work closely with cross-functional teams to improve data reliability and efficiency.
Monitor and troubleshoot data pipelines for performance improvements.

Required Skills & Qualifications:

Excellent communication and ability to handle direct client interactions.
2+ years of experience in Data Engineering.
Expertise in Databricks, AWS, Scala, and Apache Spark.
Strong knowledge of big data architecture, ETL processes, and cloud data solutions.
Ability to write optimized and scalable Spark jobs for distributed computing.
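To illustrate the Spark performance tuning this role calls for, here is a small PySpark sketch of two common moves; the client stack also uses Scala, where the same DataFrame API applies. The table paths and join key are hypothetical.

```python
# Sketch: two common Spark tuning moves, a broadcast join and an explicit
# repartition before a partitioned write. Tables and keys are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning-demo").getOrCreate()

orders = spark.read.parquet("s3a://example/curated/orders/")    # large fact
dealers = spark.read.parquet("s3a://example/curated/dealers/")  # small dimension

# Broadcasting the small dimension avoids a full shuffle of the fact table.
enriched = orders.join(broadcast(dealers), on="dealer_id", how="left")

# Repartition by the write key so output files align with downstream reads.
(enriched.repartition("region")
         .write.mode("overwrite")
         .partitionBy("region")
         .parquet("s3a://example/marts/orders_by_region/"))
```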
Posted 2 days ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Company Size: Mid-Sized
Experience Required: 3 - 6 years
Working Days: 5 days/week
Office Location: Karnataka, Bengaluru

Role & Responsibilities

Hopscotch is looking for a passionate Data Engineer to join our team. You will work closely with other teams like data analytics, marketing, data science and individual product teams to specify, validate, prototype, scale, and deploy data pipelines, features and data architecture.

Ability to work in a fast-paced startup mindset. Should be able to manage all aspects of data extraction, transfer and load activities.
Develop data pipelines that make data available across platforms.
Should be comfortable executing ETL (Extract, Transform and Load) processes, including data ingestion, data cleaning and curation into a data warehouse, database, or data platform.
Work on various aspects of the AI/ML ecosystem: data modeling, data and ML pipelines.
Work closely with DevOps and the senior Architect to come up with scalable system and model architectures for enabling real-time and batch services.

Ideal Candidate

5+ years of experience as a data engineer or data scientist with a focus on data engineering and ETL jobs.
Well versed in the concepts of data warehousing, data modelling and/or data analysis.
Experience using and building pipelines and performing ETL with industry-standard best practices on Redshift (more than 2 years).
Ability to troubleshoot and solve performance issues with data ingestion, data processing and query execution on Redshift.
Good understanding of orchestration tools like Airflow.
Strong Python and SQL coding skills.
Strong experience in distributed systems like Spark.
Experience with AWS data and ML technologies (AWS Glue, MWAA, Data Pipeline, EMR, Athena, Redshift, Lambda, etc.).
Solid hands-on experience with various data extraction techniques like CDC or time/batch based, and the related tools (Debezium, AWS DMS, Kafka Connect, etc.) for near-real-time and batch data extraction. A toy sketch of the time-based approach follows this listing.

Perks, Benefits and Work Culture

Work with cutting-edge technologies on high-impact systems.
Be part of a collaborative and technically driven team.
Enjoy flexible work options and a culture that values learning.
Competitive salary, benefits, and growth opportunities.

Skills: AWS, data extraction, ETL, data engineering, data warehousing, CDC, AWS data technologies, data modeling, Airflow, pipelines, Spark, Kafka Connect, SQL, Python, Redshift, ML
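As a toy illustration of time-based incremental extraction (the batch alternative to log-based CDC tools like Debezium mentioned above), here is a self-contained sketch using SQLite as a stand-in source. In practice the source would be an OLTP database and the sink a warehouse like Redshift.

```python
# Toy incremental extraction: pull only rows changed since the last watermark.
# SQLite stands in for the OLTP source; a real pipeline would land the batch
# in S3 and COPY it into Redshift.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL, updated_at TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, 100.0, "2024-06-01T10:00:00"),
     (2, 250.0, "2024-06-02T09:30:00"),
     (3, 75.0,  "2024-06-03T14:45:00")],
)

def extract_since(watermark: str) -> list[tuple]:
    """Return rows updated after the stored watermark (exclusive)."""
    return conn.execute(
        "SELECT id, amount, updated_at FROM orders WHERE updated_at > ? "
        "ORDER BY updated_at",
        (watermark,),
    ).fetchall()

batch = extract_since("2024-06-01T23:59:59")
print(batch)                                        # rows 2 and 3 only
new_watermark = batch[-1][2] if batch else None     # persist for the next run
print("next watermark:", new_watermark)
```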
Posted 2 days ago
5.0 - 8.0 years
9 - 15 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Job Description:

As a Data Engineer for our Large Language Model project, you will play a crucial role in designing, implementing, and maintaining the data infrastructure. Your expertise will be instrumental in ensuring the efficient flow of data, enabling seamless integration with various components, and optimizing data processing pipelines. 5+ years of relevant experience in data engineering roles is required.

Key Responsibilities:

Data Pipeline Development - Design, develop, and maintain scalable and efficient data pipelines to support the training and deployment of large language models. Implement ETL processes to extract, transform, and load diverse datasets into suitable formats for model training.
Data Integration - Collaborate with cross-functional teams, including data scientists and software engineers, to integrate data sources and ensure the availability of relevant and high-quality data. Implement solutions for real-time data processing and integration, fostering model development agility.
Data Quality Assurance - Establish and maintain robust data quality checks and validation processes to ensure the accuracy and consistency of datasets. Troubleshoot data quality issues, identify root causes, and implement corrective measures.
Infrastructure Management - Work closely with DevOps and IT teams to manage and optimize the data storage infrastructure, ensuring scalability and performance. Implement best practices for data security, access control, and compliance with data governance policies.
Performance Optimization - Identify bottlenecks and inefficiencies in data processing pipelines and implement optimizations to enhance overall system performance. Continuously monitor and evaluate system performance metrics, making proactive adjustments as needed.

Skills & Tools

Programming Languages - Proficiency in languages such as Python for building robust data processing applications.
Big Data Technologies - Experience with distributed computing frameworks like Apache Spark, Databricks and DBT for large-scale data processing.
Database Systems - In-depth knowledge of both relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., vector databases, MongoDB, Cassandra, etc.).
Data Warehousing - Familiarity with data warehousing solutions such as Amazon Redshift, Google BigQuery, or Snowflake.
ETL Tools - Hands-on experience with ETL tools like Apache NiFi, Talend, or Apache Airflow. Knowledge of NLP is an added advantage.
Cloud Services - Experience with cloud platforms like AWS, Azure, or Google Cloud for deploying and managing data infrastructure.
Problem Solving - Analytical mindset with a proactive approach to identifying and solving complex data engineering challenges.
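A minimal sketch of the data quality gates such a role implements, in PySpark; the dataset path, column names, and thresholds are assumptions.

```python
# Sketch: basic data quality gates before a training-data load.
# Dataset path, column names, and the 1% threshold are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("s3a://example/curated/documents/")

total = df.count()
null_ids = df.filter(F.col("doc_id").isNull()).count()
dupes = total - df.dropDuplicates(["doc_id"]).count()
empty_text = df.filter(F.length(F.trim(F.col("text"))) == 0).count()

checks = {
    "null_doc_id_ratio": null_ids / max(total, 1),
    "duplicate_ratio": dupes / max(total, 1),
    "empty_text_ratio": empty_text / max(total, 1),
}

THRESHOLD = 0.01  # fail the batch if any defect class exceeds 1%
failures = {name: ratio for name, ratio in checks.items() if ratio > THRESHOLD}
if failures:
    raise ValueError(f"Data quality gate failed: {failures}")
print("Data quality checks passed:", checks)
```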
Posted 2 days ago
2.0 - 6.0 years
4 - 8 Lacs
Bengaluru
Work from Office
As an Associate Software Developer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:

Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques.
Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements.
Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours.
Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise

Experience in big data technologies like Hadoop, Apache Spark, and Hive.
Practical experience in Core Java (1.8 preferred), Python or Scala.
Experience with AWS cloud services including S3, Redshift, EMR, etc.
Strong expertise in RDBMS and SQL.
Good experience in Linux and shell scripting.
Experience building data pipelines using Apache Airflow.

Preferred technical and professional experience

You thrive on teamwork and have excellent verbal and written communication skills.
Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions.
Ability to communicate results to technical and non-technical audiences.
Posted 2 days ago
3.0 - 7.0 years
5 - 9 Lacs
Pune
Work from Office
The Developer leads cloud application development and deployment. A developer's responsibility is to lead the execution of a project by working with a senior-level resource on assigned development/deployment activities, and to design, build, and maintain cloud environments focusing on uptime, access, control, and network security using automation and configuration management tools.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise

Strong proficiency in Java, Spring Framework, Spring Boot, and RESTful APIs, with an excellent understanding of OOP and design patterns.
Strong knowledge of ORM tools like Hibernate or JPA and Java-based microservices frameworks; hands-on experience with Spring Boot microservices.
Primary skills: Core Java, Spring Boot, Java2/EE, microservices, Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.), Spark. Good to have: Python.
Strong knowledge of microservice logging, monitoring, debugging and testing; in-depth knowledge of relational databases (e.g., MySQL).
Experience with container platforms such as Docker and Kubernetes, and messaging platforms such as Kafka or IBM MQ; good understanding of Test-Driven Development.
Familiarity with Ant, Maven or other build automation frameworks; good knowledge of basic UNIX commands.
Experience in concurrent design and multi-threading.

Preferred technical and professional experience: None
Posted 2 days ago
5.0 - 10.0 years
7 - 12 Lacs
Pune
Work from Office
Create Solution Outlines and Macro Designs to describe end-to-end product implementation in data platforms, including system integration, data ingestion, data processing, serving layer, design patterns, and platform architecture principles for the data platform.
Contribute to pre-sales and sales support through RfP responses, solution architecture, planning and estimation.
Contribute to reusable component/asset/accelerator development to support capability development.
Participate in customer presentations as a Platform Architect / Subject Matter Expert on Big Data, Azure Cloud and related technologies.
Participate in customer PoCs to deliver the outcomes.
Participate in delivery reviews/product reviews and quality assurance, and work as a design authority.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise

Experience in designing data products providing descriptive, prescriptive, and predictive analytics to end users or other systems.
Experience in data engineering and architecting data platforms.
Experience in architecting and implementing data platforms on the Azure Cloud Platform. Experience on Azure cloud is mandatory (ADLS Gen1/Gen2, Data Factory, Databricks, Synapse Analytics, Azure SQL, Cosmos DB, Event Hub, Snowflake), plus Azure Purview, Microsoft Fabric, Kubernetes, Terraform, and Airflow.
Experience in the Big Data stack (Hadoop ecosystem: Hive, HBase, Kafka, Spark, Scala, PySpark, Python, etc.) with Cloudera or Hortonworks.

Preferred technical and professional experience

Experience in architecting complex data platforms on the Azure Cloud Platform and on-prem.
Experience and exposure to implementation of Data Fabric and Data Mesh concepts and solutions like Microsoft Fabric, Starburst, Denodo, IBM Data Virtualisation, Talend, or Tibco Data Fabric.
Exposure to data cataloging and governance solutions like Collibra, Alation, Watson Knowledge Catalog, Databricks Unity Catalog, Apache Atlas, Snowflake Data Glossary, etc.
Posted 2 days ago
2.0 - 6.0 years
4 - 8 Lacs
Kochi
Work from Office
As an Associate Software Developer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:

Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques.
Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements.
Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours.
Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise

Experience in big data technologies like Hadoop, Apache Spark, and Hive.
Practical experience in Core Java (1.8 preferred), Python or Scala.
Experience with AWS cloud services including S3, Redshift, EMR, etc.
Strong expertise in RDBMS and SQL.
Good experience in Linux and shell scripting.
Experience building data pipelines using Apache Airflow.

Preferred technical and professional experience

You thrive on teamwork and have excellent verbal and written communication skills.
Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions.
Ability to communicate results to technical and non-technical audiences.
Posted 2 days ago
2.0 - 5.0 years
4 - 7 Lacs
Pune
Work from Office
As a Big Data Engineer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating source-to-target pipelines/workflows and implementing solutions that tackle the client's needs.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise

Big Data development with Hadoop, Hive, Spark, PySpark, and strong SQL.
Ability to incorporate a variety of statistical and machine learning techniques.
Basic understanding of cloud platforms (AWS, Azure, etc.).
Ability to use programming languages like Java, Python, Scala, etc., to build pipelines to extract and transform data from a repository to a data consumer.
Ability to use Extract, Transform, and Load (ETL) tools and/or data integration or federation tools to prepare and transform data as needed.
Ability to use leading-edge tools such as Linux, SQL, Python, Spark, Hadoop and Java.

Preferred technical and professional experience

Basic understanding of or experience with predictive/prescriptive modeling skills.
You thrive on teamwork and have excellent verbal and written communication skills.
Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions.
Posted 2 days ago
2.0 - 5.0 years
4 - 7 Lacs
Navi Mumbai
Work from Office
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, and provide regular support and guidance to project teams on complex coding, issue resolution and execution. Your primary responsibilities include:

Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
Strive for continuous improvement by testing the built solution and working under an agile framework.
Discover and implement the latest technology trends to maximize value and build creative solutions.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise

Experience with Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing.
Big Data Technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools.
Data Engineering Skills: strong understanding of ETL pipelines, data modeling, and data warehousing concepts.
Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation.
Data Processing Frameworks: knowledge of data processing libraries such as Pandas and NumPy.
SQL Proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation.
Cloud Platforms: experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems.

Preferred technical and professional experience

Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
Good to have: experience with detection and prevention tools for company products, platforms, and customer-facing systems.
Posted 2 days ago
3.0 - 7.0 years
5 - 9 Lacs
Chennai
Work from Office
The Developer leads cloud application development and deployment. A developer's responsibility is to lead the execution of a project by working with a senior-level resource on assigned development/deployment activities, and to design, build, and maintain cloud environments focusing on uptime, access, control, and network security using automation and configuration management tools.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise

Strong proficiency in Java, Spring Framework, Spring Boot, and RESTful APIs, with an excellent understanding of OOP and design patterns.
Strong knowledge of ORM tools like Hibernate or JPA and Java-based microservices frameworks; hands-on experience with Spring Boot microservices.
Strong knowledge of microservice logging, monitoring, debugging and testing; in-depth knowledge of relational databases (e.g., MySQL).
Experience with container platforms such as Docker and Kubernetes, and messaging platforms such as Kafka or IBM MQ; good understanding of Test-Driven Development.
Familiarity with Ant, Maven or other build automation frameworks; good knowledge of basic UNIX commands.

Preferred technical and professional experience

Experience in concurrent design and multi-threading.
Primary skills: Core Java, Spring Boot, Java2/EE, microservices, Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.), Spark. Good to have: Python.
Posted 2 days ago
2.0 - 5.0 years
4 - 7 Lacs
Mumbai
Work from Office
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, and provide regular support and guidance to project teams on complex coding, issue resolution and execution. Your primary responsibilities include:

Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
Strive for continuous improvement by testing the built solution and working under an agile framework.
Discover and implement the latest technology trends to maximize value and build creative solutions.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise

Experience with Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing.
Big Data Technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools.
Data Engineering Skills: strong understanding of ETL pipelines, data modelling, and data warehousing concepts.
Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation.
Data Processing Frameworks: knowledge of data processing libraries such as Pandas and NumPy.
SQL Proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation.
Cloud Platforms: experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems.

Preferred technical and professional experience

Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
Good to have: experience with detection and prevention tools for company products, platforms, and customer-facing systems.
Posted 2 days ago
The demand for professionals with expertise in Spark is on the rise in India. Spark, an open-source distributed computing system, is widely used for big data processing and analytics. Job seekers in India looking to explore opportunities in Spark can find a variety of roles in different industries.
Cities with a high concentration of tech companies and startups are actively hiring for Spark roles.
The average salary range for Spark professionals in India varies based on experience level:
- Entry-level: INR 4-6 lakhs per annum
- Mid-level: INR 8-12 lakhs per annum
- Experienced: INR 15-25 lakhs per annum
Salaries may vary based on the company, location, and specific job requirements.
In the field of Spark, a typical career progression may look like:
- Junior Developer
- Senior Developer
- Tech Lead
- Architect
Advancing in this career path often requires gaining experience, acquiring additional skills, and taking on more responsibilities.
Apart from proficiency in Spark, professionals in this field are often expected to have knowledge or experience in:
- Hadoop
- Java or Scala programming
- Data processing and analytics
- SQL databases
Having a combination of these skills can make a candidate more competitive in the job market.
As you explore opportunities in Spark jobs in India, remember to prepare thoroughly for interviews and showcase your expertise confidently. With the right skills and knowledge, you can excel in this growing field and advance your career in the tech industry. Good luck with your job search!