5.0 - 10.0 years
10 - 14 Lacs
Chennai
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: PySpark
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will be responsible for overseeing the application development process and ensuring successful project delivery.

Roles & Responsibilities:
- Expected to be an SME
- Collaborate with and manage the team to perform
- Responsible for team decisions
- Engage with multiple teams and contribute to key decisions
- Provide solutions to problems for the immediate team and across multiple teams
- Lead the application development process
- Ensure successful project delivery
- Implement best practices for application design and configuration

Professional & Technical Skills:
- Must-have skills: Proficiency in PySpark
- Strong understanding of big data processing
- Experience with data processing frameworks such as Apache Spark
- Hands-on experience in building scalable applications
- Knowledge of cloud platforms for application deployment

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark
- This position is based at our Chennai office
- 15 years of full-time education is required

Qualification: 15 years full time education
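For a concrete feel of the day-to-day PySpark work this role describes, here is a minimal, hedged sketch of a batch aggregation job; the input path, column names, and output location are illustrative assumptions, not details from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical example: summarise completed orders per day from a Parquet dataset.
spark = SparkSession.builder.appName("daily-order-totals").getOrCreate()

orders = spark.read.parquet("/data/raw/orders")  # illustrative input path

daily_totals = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .groupBy("order_date")
    .agg(
        F.count("*").alias("order_count"),
        F.sum("amount").alias("total_amount"),
    )
)

# Partitioned, columnar output keeps downstream reads scalable.
(
    daily_totals.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("/data/curated/daily_order_totals")
)

spark.stop()
```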
Posted 1 month ago
5.0 - 10.0 years
9 - 13 Lacs
Bengaluru
Work from Office
Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models. You will play a crucial role in shaping the data platform components.

Roles & Responsibilities:
- Expected to be an SME; collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Lead the implementation of data platform solutions.
- Conduct regular data platform performance assessments.
- Identify and address data platform security vulnerabilities.
- Stay updated on emerging data platform technologies.

Professional & Technical Skills:
- Must-have skills: Proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of cloud-based data platforms.
- Experience in designing and implementing data pipelines.
- Knowledge of data governance and compliance standards.
- Experience with data modeling and database design.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Qualification: 15 years full time education
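To illustrate the kind of pipeline such a platform typically carries, a minimal PySpark sketch in the Delta Lake style used on Databricks; the table names, columns, and cleansing rules are assumptions, not details from the posting.

```python
from pyspark.sql import functions as F

# On Databricks, the runtime provides a SparkSession named `spark`.
# Source and target table names below are hypothetical.
raw_events = spark.read.table("raw.events")

cleaned = (
    raw_events
    .dropDuplicates(["event_id"])
    .withColumn("event_date", F.to_date("event_ts"))
    .filter(F.col("event_type").isNotNull())
)

# Publish a curated Delta table that downstream systems and models can query.
(
    cleaned.write
    .format("delta")
    .mode("overwrite")
    .option("overwriteSchema", "true")
    .saveAsTable("curated.events")
)
```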
Posted 1 month ago
5.0 - 10.0 years
5 - 9 Lacs
Kolkata
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: Google BigQuery
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: Full-time 15 years qualification

Roles and Responsibilities:
1. Design, create, code, and support a variety of data pipelines and models on GCP cloud technology.
2. Strong hands-on exposure to GCP services such as BigQuery, Composer, etc.
3. Partner with business/data analysts, architects, and other key project stakeholders to deliver data requirements.
4. Develop data integration and ETL (Extract, Transform, Load) processes.
5. Support existing data warehouses and related pipelines.
6. Ensure data quality, security, and compliance.
7. Optimize data processing and storage efficiency; troubleshoot issues in the data space.
8. Seek to learn new skills/tools utilized in the data space (e.g., dbt, Monte Carlo, etc.).
9. Excellent communication skills, verbal and written; excellent analytical skills with an Agile mindset.
10. Demonstrates strong attention to detail and delivery accuracy.
11. Self-motivated team player with the ability to overcome challenges and achieve desired results.
12. Work effectively in a globally distributed environment.
13. The employee should be ready to work in shift B, i.e. 12:30 pm to 10:30 pm.
14. The employee should be ready to work as an individual contributor.

Skill Proficiency Expectation:
- Expert: Data Storage, BigQuery, SQL, Composer, Data Warehousing Concepts
- Intermediate: Python
- Basic/Preferred: DB, Kafka, Pub/Sub

Additional Information:
- The candidate should have a minimum of 5 years of experience in Google BigQuery.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful solutions.
- This position is based at our Mumbai office.

Qualification: Full-time 15 years qualification
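As a small, hedged illustration of the BigQuery work described above, a sketch using the google-cloud-bigquery Python client; the project, dataset, table, and column names are hypothetical.

```python
from google.cloud import bigquery

# Hypothetical project and table names, for illustration only.
client = bigquery.Client(project="my-analytics-project")

sql = """
    SELECT order_date,
           COUNT(*)    AS order_count,
           SUM(amount) AS total_amount
    FROM `my-analytics-project.sales.orders`
    WHERE status = 'COMPLETED'
    GROUP BY order_date
    ORDER BY order_date
"""

# Run the query and iterate over the result rows.
for row in client.query(sql).result():
    print(row["order_date"], row["order_count"], row["total_amount"])
```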
Posted 1 month ago
7.0 - 12.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: PySpark
Good-to-have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your day will involve overseeing the application development process, collaborating with team members, and making key decisions to ensure project success.

Roles & Responsibilities:
- Expected to be an SME
- Collaborate with and manage the team to perform
- Responsible for team decisions
- Engage with multiple teams and contribute to key decisions
- Provide solutions to problems for the immediate team and across multiple teams
- Lead the application development process effectively
- Ensure timely delivery of projects
- Provide guidance and mentorship to team members

Professional & Technical Skills:
- Must-have skills: Proficiency in PySpark
- Strong understanding of big data processing
- Experience with data manipulation and transformation
- Hands-on experience in building scalable applications
- Knowledge of cloud platforms and services

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in PySpark
- This position is based at our Bengaluru office
- 15 years of full-time education is required

Qualification: 15 years full time education
Posted 1 month ago
5.0 - 7.0 years
7 - 9 Lacs
Hyderabad
Work from Office
You will be a key member of our Data Engineering team, focused on designing, developing, and maintaining robust data solutions in on-premise environments. You will work closely with internal teams and client stakeholders to build and optimize data pipelines and analytical tools using Python, PySpark, SQL, and Hadoop ecosystem technologies. This role requires deep hands-on experience with big data technologies in traditional data center environments (non-cloud).

What you'll be doing:
- Design, build, and maintain on-premise data pipelines to ingest, process, and transform large volumes of data from multiple sources into data warehouses and data lakes
- Develop and optimize PySpark and SQL jobs for high-performance batch and real-time data processing
- Ensure the scalability, reliability, and performance of data infrastructure in an on-premise setup
- Collaborate with data scientists, analysts, and business teams to translate their data requirements into technical solutions
- Troubleshoot and resolve issues in data pipelines and data processing workflows
- Monitor, tune, and improve Hadoop clusters and data jobs for cost and resource efficiency
- Stay current with on-premise big data technology trends and suggest enhancements to improve data engineering capabilities

Requirements:
- Bachelor's degree in Computer Science, Software Engineering, or a related field
- 5+ years of experience in data engineering or a related domain
- Strong programming skills in Python (with experience in PySpark)
- Expertise i
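A hedged sketch of the kind of on-premise PySpark/SQL batch job described above; the HDFS landing path, Hive database, and column names are illustrative assumptions.

```python
from pyspark.sql import SparkSession

# enableHiveSupport lets Spark read and write Hive tables on an on-premise cluster.
spark = (
    SparkSession.builder
    .appName("customer-daily-summary")
    .enableHiveSupport()
    .getOrCreate()
)

# Hypothetical HDFS landing zone for a day's extracts.
transactions = (
    spark.read
    .option("header", "true")
    .csv("hdfs:///data/landing/transactions/2024-01-01")
)
transactions.createOrReplaceTempView("transactions_stg")

daily_summary = spark.sql("""
    SELECT customer_id,
           COUNT(*)                    AS txn_count,
           SUM(CAST(amount AS DOUBLE)) AS total_amount
    FROM transactions_stg
    GROUP BY customer_id
""")

# Persist into a warehouse table (hypothetical database/table name).
daily_summary.write.mode("overwrite").saveAsTable("dwh.customer_daily_summary")
spark.stop()
```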
Posted 1 month ago
5.0 - 7.0 years
7 - 9 Lacs
Hyderabad
Work from Office
You will be a key member of our Data Engineering team, focused on designing, developing, and maintaining robust data solutions in on-premise environments. You will work closely with internal teams and client stakeholders to build and optimize data pipelines and analytical tools using Python, PySpark, SQL, and Hadoop ecosystem technologies. This role requires deep hands-on experience with big data technologies in traditional data center environments (non-cloud).

What you'll be doing:
- Design, build, and maintain on-premise data pipelines to ingest, process, and transform large volumes of data from multiple sources into data warehouses and data lakes
- Develop and optimize PySpark and SQL jobs for high-performance batch and real-time data processing
- Ensure the scalability, reliability, and performance of data infrastructure in an on-premise setup
- Collaborate with data scientists, analysts, and business teams to translate their data requirements into technical solutions
- Troubleshoot and resolve issues in data pipelines and data processing workflows
- Monitor, tune, and improve Hadoop clusters and data jobs for cost and resource efficiency
- Stay current with on-premise big data technology trends and suggest enhancements to improve data engineering capabilities

Requirements:
- Bachelor's degree in Computer Science, Software Engineering, or a related field
- 6+ years of experience in data engineering or a related domain
- Strong programming skills in Python (with experience in PySpa
Posted 1 month ago
2.0 - 5.0 years
18 - 21 Lacs
Hyderabad
Work from Office
Overview
Annalect is currently seeking a data engineer to join our technology team. In this role you will build Annalect products which sit atop cloud-based data infrastructure. We are looking for people who have a shared passion for technology, design & development, data, and fusing these disciplines together to build cool things. In this role, you will work on one or more software and data products in the Annalect Engineering Team. You will participate in technical architecture, design, and development of software products as well as research and evaluation of new technical solutions.

Responsibilities
- Design, build, test and deploy scalable and reusable systems that handle large amounts of data.
- Collaborate with product owners and data scientists to build new data products.
- Ensure data quality and reliability.

Qualifications
- Experience designing and managing data flows.
- Experience designing systems and APIs to integrate data into applications.
- 4+ years of Linux, Bash, Python, and SQL experience.
- 2+ years using Spark and other frameworks to process large volumes of data.
- 2+ years using Parquet, ORC, or other columnar file formats.
- 2+ years using AWS cloud services, especially services used for data processing, e.g. Glue, Dataflow, Data Factory, EMR, Dataproc, HDInsight, Athena, Redshift, BigQuery, etc.
- Passion for technology: excitement for new technology, bleeding-edge applications, and a positive attitude towards solving real-world challenges.
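To make the columnar-format and AWS-oriented processing above concrete, a minimal hedged PySpark sketch that lands date-partitioned Parquet on S3 (which engines such as Athena can then scan efficiently); the bucket names, schema, and paths are assumptions.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-to-parquet").getOrCreate()

# Hypothetical S3 locations; on EMR or Glue the s3:// connector is preconfigured.
events = spark.read.json("s3://example-raw-bucket/events/2024/01/")

curated = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .select("event_id", "user_id", "event_type", "event_date")
)

# Columnar, date-partitioned output keeps downstream query engines efficient.
(
    curated.write
    .mode("append")
    .partitionBy("event_date")
    .parquet("s3://example-curated-bucket/events/")
)

spark.stop()
```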
Posted 1 month ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Key Responsibilities
• ARCHITECTURE AND DESIGN FOR DATA ENGINEERING AND MACHINE LEARNING PROJECTS: Establishing architecture and target design for data engineering and machine learning projects.
• REQUIREMENT ANALYSIS, PLANNING, EFFORT AND RESOURCE NEEDS ESTIMATION: Current inventory analysis, review and formalize requirements, project planning and execution plan.
• ADVISORY SERVICES AND BEST PRACTICES: Troubleshooting, performance tuning, cost optimization, operational runbooks and mentoring.
• LARGE MIGRATIONS: Assist customers with large migrations to Databricks from Hadoop ecosystems, data warehouses (Teradata, DataStage, Netezza, Ab Initio), ETL engines (Informatica), SAS, SQL, DW, and cloud-based data platforms like Redshift, Snowflake, EMR, etc.
• DESIGN, BUILD AND OPTIMIZE DATA PIPELINES: The Databricks implementation will be best in class, with flexibility for future iterations.
• PRODUCTION READINESS: Assisting with production readiness for customers, including exception handling, production cutover, capture analysis, alert scheduling and monitoring.
• MACHINE LEARNING (ML) – MODEL REVIEW, TUNING, ML OPERATIONS AND OPTIMIZATION: Build and review ML models, ML best practices, model lifecycle, ML frameworks and deployment of models in production.

Must Have:
▪ Pre-sales experience is a must.
▪ Hands-on experience with distributed computing frameworks like Databricks and the Spark ecosystem (Spark Core, PySpark, Spark Streaming, Spark SQL); see the streaming sketch below.
▪ Willing to work with product teams to best optimize product features/functions.
▪ Experience with batch workloads and real-time streaming with high-volume data frequency.
▪ Performance optimization of Spark workloads.
▪ Environment setup, user management, authentication and cluster management on Databricks.
▪ Professional curiosity and the ability to enable yourself in new technologies and tasks.
▪ Good understanding of SQL and a good grasp of relational and analytical database management theory and practice.

Key Skills:
• Python, SQL and PySpark
• Big Data ecosystem (Hadoop, Hive, Sqoop, HDFS, HBase)
• Spark ecosystem (Spark Core, Spark Streaming, Spark SQL) / Databricks
• Azure (ADF, ADB, Logic Apps, Azure SQL Database, Azure Key Vault, ADLS, Synapse)
• AWS (Lambda, AWS Glue, S3, Redshift)
• Data Modelling, ETL Methodology
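As a hedged illustration of the streaming workloads referenced in the Must Have list above, a minimal Spark Structured Streaming sketch that reads JSON events from Kafka and appends them to a Delta path; the broker addresses, topic, schema, checkpoint, and output locations are assumptions, and the Delta sink presumes a Databricks or delta-enabled runtime.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("payments-stream").getOrCreate()

# Hypothetical message schema for the Kafka topic.
schema = StructType([
    StructField("payment_id", StringType()),
    StructField("account_id", StringType()),
    StructField("amount", DoubleType()),
])

raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
    .option("subscribe", "payments")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as bytes; decode and parse the JSON payload.
payments = (
    raw.select(F.from_json(F.col("value").cast("string"), schema).alias("p"))
    .select("p.*")
)

query = (
    payments.writeStream
    .format("delta")
    .option("checkpointLocation", "/chk/payments")
    .outputMode("append")
    .start("/delta/payments")
)
query.awaitTermination()
```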
Posted 1 month ago
2.0 - 4.0 years
0 Lacs
Greater Hyderabad Area
On-site
About Us:
Join our stealth-mode AI startup on a mission to revolutionize AI and data solutions. Headquartered in Hyderabad, we are a well-funded startup with a world-class team and a passion for innovation in AI, NLP, Computer Vision, and Speech Recognition. We are looking for a highly motivated Data Engineer with 2 to 4 years of experience to join our team and work on cutting-edge projects in AI and big data technologies.

Role Overview:
As a Data Engineer, you will design, build, and optimize scalable data pipelines and platforms to support our AI-driven solutions. You’ll collaborate with cross-functional teams to enable real-time data processing and insights for enterprise-level applications.

Key Responsibilities:
- Develop and maintain robust data pipelines using tools like PySpark, Kafka, and Airflow.
- Design and optimize data workflows for high scalability and performance using Hadoop, HDFS, and Hive.
- Integrate structured and unstructured data from diverse sources into a centralized platform.
- Leverage big data technologies for real-time processing and streaming using Spark Streaming and Nifi.
- Work on cloud-based platforms such as AWS, Azure, and GCP to deploy and monitor scalable data solutions.
- Collaborate with AI/ML teams to deploy machine learning models using MLflow and integrate AI capabilities into data pipelines.
- Automate and monitor workflows to ensure seamless operations using CI/CD pipelines, Kubernetes, and Docker.
- Implement data validation, performance testing, and troubleshooting of large-scale datasets.
- Prepare and share actionable insights through BI tools like Tableau and Grafana.

Required Skills and Qualifications:
- Education: Bachelor’s or Master’s degree in Computer Science, Data Engineering, or related fields.
- Experience: 2 to 4 years in data engineering roles, working with big data ecosystems.
- Technical Proficiency:
  - Big Data Tools: Hadoop, HDFS, PySpark, Hive, Sqoop, Kafka, Spark Streaming, Airflow, Presto, Nifi.
  - Cloud Platforms: AWS (Glue, S3, EMR), Azure (ADF, HDInsight), GCP (BigQuery, Pub/Sub).
  - Programming Languages: Python, SQL, Scala.
  - DevOps & Automation: Jenkins, Ansible, Kubernetes, Docker.
  - Databases: MySQL, Oracle, HBase, Redis.
  - Visualization Tools: Tableau, Grafana, Zeppelin.
- Knowledge of machine learning models, AI tools (e.g., TensorFlow, H2O), and feature engineering is a plus.
- Strong problem-solving skills with attention to detail and the ability to manage multiple projects.
- Excellent communication and collaboration skills in a fast-paced environment.

What We Offer:
- Opportunity to work on innovative AI projects with a global impact.
- Collaborative work culture with access to cutting-edge technologies.
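For the orchestration piece mentioned above (Airflow), here is a minimal hedged sketch of a daily DAG; the DAG id, schedule, and task bodies are illustrative placeholders, not the startup's actual pipeline.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_events(**context):
    # Placeholder: pull one day's events from a source system.
    print("extracting events for", context["ds"])


def transform_events(**context):
    # Placeholder: run a PySpark or pandas transformation for that day.
    print("transforming events for", context["ds"])


with DAG(
    dag_id="events_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_events", python_callable=extract_events)
    transform = PythonOperator(task_id="transform_events", python_callable=transform_events)

    extract >> transform
```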
Posted 1 month ago
2.0 - 7.0 years
12 - 16 Lacs
Bengaluru
Work from Office
Project Role: Security Delivery Lead
Project Role Description: Leads the implementation and delivery of Security Services projects, leveraging our global delivery capability (method, tools, training, assets).
Must-have skills: Product Security
Good-to-have skills: NA
Minimum 12 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: AI Security Architect - Enterprise AI Strategy, Scalable ML Platforms, and Secure AI Design
We are looking for a seasoned and visionary AI Architect with 12+ years of experience in designing, securing, and leading scalable, responsible AI systems. This role blends AI solution architecture with security architecture and is ideal for professionals who bring together deep technical knowledge, strategic thinking, and a passion for trustworthy, ethical innovation. As an AI Architect, you will define the enterprise AI and security architecture, embed secure-by-design practices across AI platforms, and ensure alignment with privacy, compliance, and ethical standards across the entire ML lifecycle.

Roles & Responsibilities:
- Own the architectural vision for enterprise-wide AI and ML platforms, ensuring scalability, resilience, security, and regulatory compliance.
- Develop and maintain architectural blueprints for secure and responsible AI, covering areas such as bias mitigation, explainability, threat modeling, and data protection.
- Define and implement AI security architecture practices, including secure access to models, datasets, APIs, and ML pipelines.
- Collaborate with MLOps, engineering, DevSecOps, and cloud security teams to develop standardized, reusable, and secured AI infrastructure components.
- Ensure AI systems comply with global regulations and standards (e.g., GDPR, ISO 42001, NIST AI RMF, and ISO/IEC 27001).
- Evaluate and introduce tools and frameworks that support privacy-preserving AI, adversarial robustness, model security, and interpretability.
- Lead efforts to design and enforce secure AI development workflows, from data ingestion to model deployment and monitoring.
- Partner with Security Architects and Risk teams to identify and mitigate AI-specific attack surfaces, including adversarial attacks and model poisoning.
- Conduct risk assessments and threat modeling for AI systems, including LLMs, generative models, and federated learning architectures.
- Collaborate with internal InfoSec, Privacy, and Legal stakeholders to align AI initiatives with enterprise cybersecurity strategies.
- Establish monitoring and incident response guidelines for AI workloads, including model drift, data leakage, and compliance alerts.
- Lead and mentor a multidisciplinary team of AI engineers, ML architects, and AI security specialists.
- Drive cross-functional initiatives with stakeholders in cloud, legal, compliance, and business domains to ensure holistic AI strategy implementation.
- Serve as a strategic advisor on AI and ML security topics across various business units and projects.
- Support the development and enforcement of enterprise-wide AI security and governance policies.
- Lead architecture review boards focused on AI and ensure consistent application of best practices across AI platforms.

Professional & Technical Skills:
- Strong experience designing and deploying secure, large-scale ML systems in cloud and hybrid environments.
- Deep understanding of secure development practices, identity and access management (IAM) for ML workloads, model versioning, and auditability.
- Familiarity with:
  - Cloud-native security tools (AWS IAM, KMS, GCP Workload Identity, Azure Key Vault)
  - AI attack mitigation (e.g., adversarial training, input sanitization, model watermarking)
  - Secure MLOps and CI/CD for AI
  - Tools for model explainability (SHAP, LIME), monitoring (Prometheus, Grafana), and compliance tracking.
- Experience with data privacy, encryption techniques (at rest/in transit/in use), and secure federated learning is a plus.
- Proven leadership in AI security architecture and secure ML engineering practices.
- Exceptional stakeholder communication and the ability to advocate for responsible AI across technical and executive teams.
- Strategic mindset with an ability to balance innovation with risk mitigation.
- Strong documentation, risk assessment, and audit reporting skills in security-centric environments.
- Proven success in building and securing AI platforms with a strong focus on privacy, ethical AI, and regulatory compliance.

Additional Information:
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Information Security, or related field.
- Industry certifications preferred: Cloud AI (e.g., AWS Certified Machine Learning Specialty, GCP ML Engineer); Security (e.g., CISSP, CCSP, Certified AI Security Professional, TOGAF).
- 12+ years of experience in AI/ML solution architecture, with 4+ years focused on AI security, governance, or compliance.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Qualification: 15 years full time education
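To ground the explainability tooling named above (SHAP is one of the listed options), a minimal hedged sketch on a toy model; the dataset and model are purely illustrative and not tied to any system in this posting.

```python
import shap
import xgboost
from sklearn.datasets import load_breast_cancer

# Toy dataset and model, for illustration only.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = xgboost.XGBClassifier(n_estimators=50).fit(X, y)

# SHAP values attribute each prediction to individual features, which supports
# the kind of explainability review an AI security architect would sign off on.
explainer = shap.Explainer(model, X)
shap_values = explainer(X.iloc[:100])

shap.plots.bar(shap_values)  # global feature-importance summary for the sample
```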
Posted 1 month ago
15.0 - 20.0 years
9 - 14 Lacs
Pune
Work from Office
Project Role: AI / ML Engineer
Project Role Description: Develops applications and systems that utilize AI to improve performance and efficiency, including but not limited to deep learning, neural networks, chatbots, and natural language processing.
Must-have skills: Google Cloud Machine Learning Services
Good-to-have skills: GCP Dataflow, Google Dataproc, Google Pub/Sub
Minimum 2 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an AI / ML Engineer, you will engage in the development of applications and systems that leverage artificial intelligence to enhance performance and efficiency. Your typical day will involve collaborating with cross-functional teams to design and implement innovative solutions, utilizing advanced technologies such as deep learning and natural language processing. You will also be responsible for analyzing data and refining algorithms to ensure optimal functionality and user experience, while continuously exploring new methodologies to drive improvements in AI applications.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Assist in the design and development of AI-driven applications to meet project requirements.
- Collaborate with team members to troubleshoot and resolve technical challenges.

Professional & Technical Skills:
- Must-have skills: Proficiency in Google Cloud Machine Learning Services.
- Good-to-have skills: Experience with GCP Dataflow, Google Pub/Sub, Google Dataproc.
- Strong understanding of machine learning frameworks and libraries.
- Experience in deploying machine learning models in cloud environments.
- Familiarity with data preprocessing and feature engineering techniques.

Additional Information:
- The candidate should have a minimum of 2 years of experience in Google Cloud Machine Learning Services.
- This position is based at our Pune office.
- 15 years of full-time education is required.

Qualification: 15 years full time education
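As one small, hedged example of the GCP building blocks listed above, publishing a scoring request to a Pub/Sub topic with the google-cloud-pubsub client; the project, topic, and message fields are hypothetical.

```python
import json

from google.cloud import pubsub_v1

# Hypothetical project and topic used to queue prediction requests for a model service.
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-ml-project", "prediction-requests")

payload = {"request_id": "req-123", "features": {"age": 42, "tenure_months": 18}}
future = publisher.publish(topic_path, json.dumps(payload).encode("utf-8"))

print("Published message id:", future.result())  # blocks until the publish is acknowledged
```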
Posted 1 month ago
7.0 - 10.0 years
10 - 15 Lacs
Ballari, Chitradurga
Work from Office
We are looking for a highly skilled and experienced Branch Receivable Manager to join our team at Equitas Small Finance Bank. The ideal candidate will have 7-10 years of experience in the BFSI industry, with expertise in Assets, Inclusive Banking, SBL, Mortgages, and Receivables.

Roles and Responsibilities:
- Manage and oversee branch receivables operations for efficient cash flow.
- Develop and implement strategies to improve receivables management.
- Collaborate with cross-functional teams to resolve customer issues and enhance service quality.
- Analyze and report on receivables performance metrics to senior management.
- Ensure compliance with regulatory requirements and internal policies.
- Lead and motivate a team of receivables professionals to achieve business objectives.

Job Requirements:
- Strong knowledge of BFSI industry trends and regulations.
- Experience in managing assets, inclusive banking, SBL, mortgages, and receivables.
- Excellent leadership and communication skills.
- Ability to analyze data and make informed decisions.
- Strong problem-solving and customer service skills.
- Proficiency in financial software and systems.
Posted 1 month ago
1.0 - 4.0 years
1 - 3 Lacs
Salem, Edappadi, Erode
Work from Office
We are looking for a highly skilled and experienced Branch Receivable Officer to join our team at Equitas Small Finance Bank. The ideal candidate will have 1-4 years of experience in the BFSI industry, preferably with a background in Assets, Inclusive Banking, SBL, Mortgages, or Receivables.

Roles and Responsibilities:
- Manage and oversee branch receivables operations for timely and accurate payments.
- Develop and implement strategies to improve receivables management and reduce delinquencies.
- Collaborate with cross-functional teams to resolve customer complaints and issues.
- Analyze and report on receivables performance metrics to senior management.
- Ensure compliance with regulatory requirements and internal policies.
- Maintain accurate records and reports of receivables transactions.

Job Requirements:
- Strong knowledge of BFSI industry trends and regulations.
- Experience in managing assets, inclusive banking, SBL, mortgages, or receivables.
- Excellent communication and interpersonal skills.
- Ability to work in a fast-paced environment and meet deadlines.
- Strong analytical and problem-solving skills.
- Proficiency in Microsoft Office and other relevant software applications.
Posted 1 month ago
1.0 - 4.0 years
1 - 3 Lacs
Puducherry, Mayiladuthurai, Karaikal
Work from Office
We are looking for a highly skilled and experienced Branch Receivable Officer to join our team at Equitas Small Finance Bank. The ideal candidate will have 1-4 years of experience in the BFSI industry, preferably with a background in Assets, Inclusive Banking, SBL, Mortgages, or Receivables.

Roles and Responsibilities:
- Manage and oversee branch receivables operations for timely and accurate payments.
- Develop and implement strategies to improve receivables management and reduce delinquencies.
- Collaborate with cross-functional teams to resolve customer complaints and issues.
- Analyze and report on receivables performance metrics to senior management.
- Ensure compliance with regulatory requirements and internal policies.
- Maintain accurate records and reports of receivables transactions.

Job Requirements:
- Strong knowledge of BFSI industry trends and regulations.
- Experience in managing assets, inclusive banking, SBL, mortgages, or receivables.
- Excellent communication and interpersonal skills.
- Ability to work in a fast-paced environment and meet deadlines.
- Strong analytical and problem-solving skills.
- Proficiency in Microsoft Office and other relevant software applications.
Posted 1 month ago
1.0 - 4.0 years
1 - 3 Lacs
Chidambaram, Mayiladuthurai, Cuddalore
Work from Office
We are looking for a highly skilled and experienced Branch Receivable Officer to join our team at Equitas Small Finance Bank. The ideal candidate will have 1-4 years of experience in the BFSI industry, preferably with a background in Assets, Inclusive Banking, SBL, Mortgages, or Receivables.

Roles and Responsibilities:
- Manage and oversee branch receivables operations for timely and accurate payments.
- Develop and implement strategies to improve receivables management and reduce delinquencies.
- Collaborate with cross-functional teams to resolve customer complaints and issues.
- Analyze and report on receivables performance metrics to senior management.
- Ensure compliance with regulatory requirements and internal policies.
- Maintain accurate records and reports of receivables transactions.

Job Requirements:
- Strong knowledge of BFSI industry trends and regulations.
- Experience in managing assets, inclusive banking, SBL, mortgages, or receivables.
- Excellent communication and interpersonal skills.
- Ability to work in a fast-paced environment and meet deadlines.
- Strong analytical and problem-solving skills.
- Proficiency in Microsoft Office and other relevant software applications.

Location: Mayiladuthurai, Chidambaram, Cuddalore, Tittakudi
Posted 1 month ago
1.0 - 5.0 years
1 - 3 Lacs
Madurai, Dindigul, Oddanchatram
Work from Office
We are looking for a highly skilled and experienced Branch Receivable Officer to join our team at Equitas Small Finance Bank. The ideal candidate will have 1-4 years of experience in the BFSI industry, preferably with a background in Assets, Inclusive Banking, SBL, Mortgages, or Receivables.

Roles and Responsibilities:
- Manage and oversee branch receivables operations for timely and accurate payments.
- Develop and implement strategies to improve receivables management and reduce delinquencies.
- Collaborate with cross-functional teams to resolve customer complaints and issues.
- Analyze and report on receivables performance metrics to senior management.
- Ensure compliance with regulatory requirements and internal policies.
- Maintain accurate records and reports of receivables transactions.

Job Requirements:
- Strong knowledge of BFSI industry trends and regulations.
- Experience in managing assets, inclusive banking, SBL, mortgages, or receivables.
- Excellent communication and interpersonal skills.
- Ability to work in a fast-paced environment and meet deadlines.
- Strong analytical and problem-solving skills.
- Proficiency in Microsoft Office and other relevant software applications.
Posted 1 month ago
7.0 - 12.0 years
12 - 22 Lacs
Pune, Chennai, Bengaluru
Work from Office
Dear Candidate, greetings of the day!

Location: Bangalore, Hyderabad, Pune and Chennai
Experience: 3.5 years to 13 years

Short Job Description
Key skills: Spark or PySpark or Scala (any big data skill is fine); all skills are good to have.

Desired Competencies (Technical/Behavioral Competency)
Must-Have (ideally not more than 3-5):
1. Minimum 3-12 years of experience in build and deployment of big data applications using Spark SQL and Spark Streaming in Python.
2. Minimum 2 years of extensive experience in design, build and deployment of Python-based applications.
3. Design and develop ETL integration patterns using Python on Spark; develop a framework for converting existing PowerCenter mappings to PySpark (Python and Spark) jobs; expertise in graph algorithms and advanced recursion techniques; hands-on experience in generating/parsing XML and JSON documents and REST API requests/responses.

Good-to-Have:
- Hands-on experience writing complex SQL queries, and exporting and importing large amounts of data using utilities.
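A hedged sketch of the ETL pattern this posting describes: a PowerCenter-style mapping re-expressed as a PySpark job over JSON pulled from a REST API. The endpoint, field names, and target path are assumptions for illustration.

```python
import requests
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("json-api-to-parquet").getOrCreate()

# Hypothetical REST endpoint returning a JSON array of customer records.
records = requests.get("https://example.com/api/customers", timeout=30).json()
customers = spark.createDataFrame(records)

# A simple "mapping": rename, derive, and filter columns, much as a PowerCenter
# expression/filter transformation would.
mapped = (
    customers
    .withColumnRenamed("cust_id", "customer_id")
    .withColumn("full_name", F.concat_ws(" ", "first_name", "last_name"))
    .filter(F.col("status") == "ACTIVE")
)

mapped.write.mode("overwrite").parquet("/data/target/customers")
spark.stop()
```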
Posted 1 month ago
8.0 - 11.0 years
35 - 37 Lacs
Kolkata, Ahmedabad, Bengaluru
Work from Office
Dear Candidate,

We are hiring a Data Engineer to build and maintain data pipelines for our analytics platform. Perfect for engineers focused on data processing and scalability.

Key Responsibilities:
- Design and implement ETL processes
- Manage data warehouses and ensure data quality
- Collaborate with data scientists to provide necessary data
- Optimize data workflows for performance

Required Skills & Qualifications:
- Proficiency in SQL and Python
- Experience with data pipeline tools like Apache Airflow
- Familiarity with big data technologies (Spark, Hadoop)
- Bonus: Knowledge of cloud data services (AWS Redshift, Google BigQuery)

Soft Skills:
- Strong troubleshooting and problem-solving skills.
- Ability to work independently and in a team.
- Excellent communication and documentation skills.

Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa
Delivery Manager
Integra Technologies
Posted 1 month ago
7.0 - 12.0 years
9 - 14 Lacs
Mumbai
Work from Office
The Senior Spark Tech Lead will be responsible for integrating and maintaining the Quantexa platform, a Spark-based software provided by a UK fintech, into our existing systems to enhance our anti-money laundering capabilities. This role requires deep expertise in Spark development, as well as an ability to analyze and understand the underlying data. Additionally, the candidate should have an interest in exploring open-source applications distributed by Apache, Kubernetes, OpenSearch and Oracle. Should be able to work as a Scrum Master.

Responsibilities

Direct Responsibilities
- Integrate and upgrade the Quantexa tool with our existing systems for enhanced anti-money laundering measures.
- Develop and maintain Spark-based applications deployed on Kubernetes clusters.
- Conduct data analysis to understand and interpret underlying data structures.
- Collaborate with cross-functional teams to ensure seamless integration and functionality.
- Stay updated with the latest trends and best practices in Spark development and Kubernetes.

Contributing Responsibilities
- Take complete ownership of project activities and understand each task in detail.
- Ensure that the team delivers on time without any delays and that deliveries meet high quality standards.
- Handle estimation, planning and scheduling of the project; ensure all internal timelines are respected and the project is on track.
- Work with the team to develop robust software adhering to the timelines and following all the standard guidelines.
- Act proactively to ensure smooth team operations and effective collaboration.
- Make sure the team adheres to all compliance processes and intervene if required.
- Assign tasks to the team and track them until completion.
- Report status proactively to management.
- Identify risks in the project and highlight them to the manager; create contingency and backup plans as necessary; create mitigation plans.
- Make decisions independently based on the situation.
- Play the role of mentor and coach team members as and when required to meet the target goals.
- Gain functional knowledge of the applications worked upon; create knowledge repositories for future reference; arrange knowledge-sharing sessions to enhance the team's functional capability.
- Evaluate new tools and come up with POCs.
- Provide feedback on the team to upper management on a timely basis.

Required Qualifications
- 7+ years of experience in development.
- Extensive experience in Hadoop, Spark and Scala development (5 years minimum).
- Strong analytical skills and experience in data analysis (SQL), data processing (such as ETL), parsing, data mapping and handling real-life data quality issues.
- Excellent problem-solving abilities and attention to detail.
- Strong communication and collaboration skills.
- Experience in Agile development.
- High-quality coding skills, including code control, unit testing, design, and documentation (code, test); experience with tools such as Sonar.
- Experience with Git and Jenkins.

Specific Qualifications (if required)
- Experience with development and deployment of Spark applications on Kubernetes clusters.
- Hands-on development experience (Java, Scala, etc.) via system integration projects; Python and Elastic are optional.

Skills Referential
Behavioural Skills:
- Ability to collaborate / Teamwork
- Adaptability
- Creativity & Innovation / Problem solving
- Attention to detail / rigor

Transversal Skills:
- Analytical ability
- Ability to develop and adapt a process
- Ability to develop and leverage networks

Education Level: Bachelor's degree or equivalent
Experience Level: At least 7 years

Fluent in English. Team player. Strong analytical skills. Quality oriented and well organized. Willing to work under pressure and mission oriented. Excellent oral and written communication skills, motivational skills, results-oriented.
Posted 1 month ago
6.0 - 10.0 years
13 - 17 Lacs
Bengaluru
Work from Office
About Persistent
We are a trusted Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what’s next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world, including 12 of the 30 most innovative US companies, 80% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem. Our growth trajectory continues, as we reported $1,231M annual revenue (16% Y-o-Y). Along with our growth, we’ve onboarded over 4900 new employees in the past year, bringing our total employee count to over 23,500+ people located in 19 countries across the globe. Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. For more details, please log in to www.persistent.com

About The Position
We are looking for a Big Data Lead who will be responsible for the management of data sets that are too big for traditional database systems to handle. You will create, design, and implement data processing jobs in order to transform the data into a more usable format. You will also ensure that the data is secure and complies with industry standards to protect the company's information.

What You'll Do
- Manage the customer's priorities of projects and requests
- Assess customer needs utilizing a structured requirements process (gathering, analyzing, documenting, and managing changes) to prioritize immediate business needs, advising on options, risks and cost
- Design and implement software products (Big Data related) including data models and visualizations
- Demonstrate participation with the teams you work in
- Deliver good solutions against tight timescales
- Be proactive, suggest new approaches and develop your capabilities
- Share what you are good at while learning from others to improve the team overall
- Show that you have a certain level of understanding for a number of technical skills, attitudes and behaviors
- Deliver great solutions
- Be focused on driving value back into the business

Expertise You'll Bring
- 6 years' experience in designing and developing enterprise application solutions for distributed systems
- Understanding of Big Data Hadoop ecosystem components (Sqoop, Hive, Pig, Flume)
- Additional experience working with Hadoop, HDFS, cluster management, Hive, Pig and MapReduce, and Hadoop ecosystem frameworks (HBase, Talend, NoSQL databases)
- Apache Spark or other streaming Big Data processing preferred; Java or other Big Data technologies will be a plus

Benefits
- Competitive salary and benefits package
- Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Inclusive Environment
- We offer hybrid work options and flexible working hours to accommodate various needs and preferences.
- Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities.

Let's unleash your full potential. See Beyond, Rise Above
Posted 1 month ago
6.0 - 10.0 years
13 - 17 Lacs
Pune
Work from Office
About Persistent
We are a trusted Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what’s next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world, including 12 of the 30 most innovative US companies, 80% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem. Our growth trajectory continues, as we reported $1,231M annual revenue (16% Y-o-Y). Along with our growth, we’ve onboarded over 4900 new employees in the past year, bringing our total employee count to over 23,500+ people located in 19 countries across the globe. Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. For more details, please log in to www.persistent.com

About The Position
We are looking for a Big Data Lead who will be responsible for the management of data sets that are too big for traditional database systems to handle. You will create, design, and implement data processing jobs in order to transform the data into a more usable format. You will also ensure that the data is secure and complies with industry standards to protect the company's information.

What You'll Do
- Manage the customer's priorities of projects and requests
- Assess customer needs utilizing a structured requirements process (gathering, analyzing, documenting, and managing changes) to prioritize immediate business needs, advising on options, risks and cost
- Design and implement software products (Big Data related) including data models and visualizations
- Demonstrate participation with the teams you work in
- Deliver good solutions against tight timescales
- Be proactive, suggest new approaches and develop your capabilities
- Share what you are good at while learning from others to improve the team overall
- Show that you have a certain level of understanding for a number of technical skills, attitudes and behaviors
- Deliver great solutions
- Be focused on driving value back into the business

Expertise You'll Bring
- 6 years' experience in designing and developing enterprise application solutions for distributed systems
- Understanding of Big Data Hadoop ecosystem components (Sqoop, Hive, Pig, Flume)
- Additional experience working with Hadoop, HDFS, cluster management, Hive, Pig and MapReduce, and Hadoop ecosystem frameworks (HBase, Talend, NoSQL databases)
- Apache Spark or other streaming Big Data processing preferred; Java or other Big Data technologies will be a plus

Benefits
- Competitive salary and benefits package
- Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Inclusive Environment
- We offer hybrid work options and flexible working hours to accommodate various needs and preferences.
- Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities.

Let's unleash your full potential. See Beyond, Rise Above
Posted 1 month ago
6.0 - 10.0 years
13 - 17 Lacs
Hyderabad
Work from Office
About Persistent
We are a trusted Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what’s next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world, including 12 of the 30 most innovative US companies, 80% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem. Our growth trajectory continues, as we reported $1,231M annual revenue (16% Y-o-Y). Along with our growth, we’ve onboarded over 4900 new employees in the past year, bringing our total employee count to over 23,500+ people located in 19 countries across the globe. Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. For more details, please log in to www.persistent.com

About The Position
We are looking for a Big Data Lead who will be responsible for the management of data sets that are too big for traditional database systems to handle. You will create, design, and implement data processing jobs in order to transform the data into a more usable format. You will also ensure that the data is secure and complies with industry standards to protect the company's information.

What You'll Do
- Manage the customer's priorities of projects and requests
- Assess customer needs utilizing a structured requirements process (gathering, analyzing, documenting, and managing changes) to prioritize immediate business needs, advising on options, risks and cost
- Design and implement software products (Big Data related) including data models and visualizations
- Demonstrate participation with the teams you work in
- Deliver good solutions against tight timescales
- Be proactive, suggest new approaches and develop your capabilities
- Share what you are good at while learning from others to improve the team overall
- Show that you have a certain level of understanding for a number of technical skills, attitudes and behaviors
- Deliver great solutions
- Be focused on driving value back into the business

Expertise You'll Bring
- 6 years' experience in designing and developing enterprise application solutions for distributed systems
- Understanding of Big Data Hadoop ecosystem components (Sqoop, Hive, Pig, Flume)
- Additional experience working with Hadoop, HDFS, cluster management, Hive, Pig and MapReduce, and Hadoop ecosystem frameworks (HBase, Talend, NoSQL databases)
- Apache Spark or other streaming Big Data processing preferred; Java or other Big Data technologies will be a plus

Benefits
- Competitive salary and benefits package
- Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Inclusive Environment
- We offer hybrid work options and flexible working hours to accommodate various needs and preferences.
- Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities.

Let's unleash your full potential. See Beyond, Rise Above
Posted 1 month ago
7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About Client:
Our Client is a multinational IT services and technology, products, and learning organization founded in 2002. They provide a range of services including consulting, staffing, and managed services for platforms like Salesforce, as well as digital engineering solutions. The company emphasizes collaboration, innovation, and adherence to industry standards to deliver high-quality solutions across various sectors, and has a global presence with offices in the US, India, Europe, Canada, and Singapore.

Job Title: Sr Java / Big Data
Location: Chennai
Experience: 7+ Years
Job Type: Contract to hire
Notice Period: Immediate joiners

Key skills: Sr Java / Big Data
1. Core Java: OOP principles, Collections (HashMap, List, Set, etc.), multithreading, Java memory management, garbage collection, exception handling, design patterns
2. Spring Boot: dependency injection, annotations (@Component, @Service, etc.), RESTful API development, exception handling, Spring Security, auto-configuration, actuators, profiles
3. SQL: joins, aggregations, subqueries, query optimization, indexes, execution plans, complex SQL writing, stored procedures
4. Big Data tools: Hadoop, Spark (RDD and DataFrame APIs, transformations/actions), Hive, batch vs streaming, partitioning, performance tuning, data ingestion tools (Kafka, Sqoop, Flume)
5. Architecture: microservices, message queues, event-driven design, scalability, fault tolerance, transaction management
Posted 1 month ago
5.0 - 10.0 years
25 - 35 Lacs
Chennai
Hybrid
Data Software Engineer

Job Description:
1. 5-12 years of experience in Big Data and data-related technologies
2. Expert-level understanding of distributed computing principles
3. Expert-level knowledge of and experience in Apache Spark
4. Hands-on programming with Python
5. Proficiency with Hadoop v2, MapReduce, HDFS, Sqoop
6. Experience with building stream-processing systems, using technologies such as Apache Storm or Spark Streaming
7. Experience with messaging systems, such as Kafka or RabbitMQ
8. Good understanding of Big Data querying tools, such as Hive and Impala
9. Experience with integration of data from multiple data sources such as RDBMS (SQL Server, Oracle), ERP and files
10. Good understanding of SQL queries, joins, stored procedures, relational schemas
11. Experience with NoSQL databases, such as HBase, Cassandra, MongoDB
12. Knowledge of ETL techniques and frameworks
13. Performance tuning of Spark jobs
14. Experience with native cloud data services (AWS, Azure Databricks, or GCP)
15. Ability to lead a team efficiently
16. Experience with designing and implementing Big Data solutions
17. Practitioner of Agile methodology
Posted 1 month ago
5.0 - 8.0 years
5 - 9 Lacs
Kolkata
Work from Office
Videonetics Technology Pvt Ltd. is looking for a Video Streaming and Model Porting Engineer to join our dynamic team and embark on a rewarding career journey.
- Analyzing customer needs to determine appropriate solutions for complex technical issues
- Creating technical diagrams, flowcharts, formulas, and other written documentation to support projects
- Providing guidance to junior engineers on projects within their areas of expertise
- Conducting research on new technologies and products in order to recommend improvements to current processes
- Developing designs for new products or systems based on customer specifications
- Researching existing technologies to determine how they could be applied in new ways to solve problems
- Reviewing existing products or concepts to ensure compliance with industry standards, regulations, and company policies
- Preparing proposals for new projects, identifying potential problems, and proposing solutions
- Estimating costs and scheduling requirements for projects and evaluating results
Posted 1 month ago