2810 Scala Jobs - Page 19

JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Your potential, unleashed. India's impact on the global economy has increased at an exponential rate, and Deloitte presents an opportunity to unleash and realize your potential amongst cutting-edge leaders and organizations shaping the future of the region, and indeed, the world beyond. At Deloitte, you bring your whole self to work, every day. Combine that with our drive to propel with purpose, and you have the perfect playground to collaborate, innovate, grow, and make an impact that matters.

The Team
Deloitte's Technology & Transformation practice can help you uncover and unlock the value buried deep inside vast amounts of data. Our global network provides strategic guidance and implementation services to help companies manage data from disparate sources and convert it into accurate, actionable information that can support fact-driven decision-making and generate an insight-driven advantage. Our practice addresses the continuum of opportunities in business intelligence & visualization, data management, performance management, and next-generation analytics and technologies, including big data, cloud, cognitive, and machine learning.

Your work profile
As a Consultant/Senior Consultant/Manager in our Technology & Transformation practice, you'll build and nurture positive working relationships with teams and clients with the intention of exceeding client expectations. To do this, the following qualifications and skills are desired:
- Good hands-on experience with GCP services including BigQuery, Cloud Storage, Dataflow, Cloud Dataproc, Cloud Composer/Airflow, and IAM (a small illustrative sketch follows this listing).
- Proficient experience with GCP databases: Bigtable, Spanner, Cloud SQL, and AlloyDB.
- Proficiency in either SQL, Python, Java, or Scala for data processing and scripting.
- Experience with development and test automation through the CI/CD pipeline (Git, Jenkins, SonarQube, Artifactory, Docker containers).
- Experience orchestrating data processing tasks using tools like Cloud Composer or Apache Airflow.
- Strong understanding of data modeling, data warehousing, and big data processing concepts.
- Solid understanding of and experience with relational database concepts and technologies such as SQL, MySQL, PostgreSQL, or Oracle.
- Design and implement data migration strategies for various database types (PostgreSQL, Oracle, AlloyDB, etc.).
- Deep understanding of at least one database type, with the ability to write complex SQL.
- Experience with NoSQL databases such as MongoDB, Scylla, Cassandra, or DynamoDB is a plus.
- Optimize data pipelines for performance and cost-efficiency, adhering to GCP best practices.
- Implement data quality checks, data validation, and monitoring mechanisms to ensure data accuracy and integrity.
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and translate them into technical solutions.
- Ability to work independently and manage multiple priorities effectively.
- Expertise in end-to-end DW implementation is preferred.

UG: B.Tech/B.E. in any specialization.

Location and way of working
Base location: Bengaluru/Hyderabad/Mumbai/Bhubaneshwar/Coimbatore/Delhi. This profile involves occasional travel to client locations. Hybrid is our default way of working, and each domain has customized the hybrid approach to its unique needs.

Your role as a Consultant/Senior Consultant/Manager
We expect our people to embrace and live our purpose by challenging themselves to identify issues that are most important for our clients, our people, and society.
In addition to living our purpose, Consultants/Senior Consultants/Managers across our organization must strive to be:
- Inspiring - Leading with integrity to build inclusion and motivation
- Committed to creating purpose - Creating a sense of vision and purpose
- Agile - Achieving high-quality results through collaboration and team unity
- Skilled at building diverse capability - Developing diverse capabilities for the future
- Persuasive/Influencing - Persuading and influencing stakeholders
- Collaborating - Partnering to build new solutions
- Delivering value - Showing commercial acumen
- Committed to expanding business - Leveraging new business opportunities
- Analytical acumen - Leveraging data to recommend impactful approaches and solutions through the power of analysis and visualization
- Effective communication - Holding well-structured and well-articulated conversations to achieve win-win outcomes
- Engagement management/delivery excellence - Effectively managing engagements to ensure timely and proactive execution, as well as course correction for the success of the engagement
- Managing change - Responding to a changing environment with resilience
- Managing quality & risk - Delivering high-quality results and mitigating risks with utmost integrity and precision
- Strategic thinking & problem solving - Applying a strategic mindset to solve business issues and complex problems
- Tech savvy - Leveraging ethical technology practices to deliver high impact for clients and for Deloitte
- Empathetic leadership and inclusivity - Creating a safe and thriving environment where everyone is valued for who they are, using empathy to understand others and adapting our behaviors and attitudes to become more inclusive

How you'll grow
Connect for impact: Our exceptional team of professionals across the globe are solving some of the world's most complex business problems, as well as directly supporting our communities, the planet, and each other. Know more in our Global Impact Report and our India Impact Report.
Empower to lead: You can be a leader irrespective of your career level. Our colleagues are characterized by their ability to inspire, support, and provide opportunities for people to deliver their best and grow both as professionals and human beings. Know more about Deloitte and our One Young World partnership.
Inclusion for all: At Deloitte, people are valued and respected for who they are and are trusted to add value to their clients, teams, and communities in a way that reflects their own unique capabilities. Know more about everyday steps that you can take to be more inclusive. At Deloitte, we believe in the unique skills, attitude, and potential each and every one of us brings to the table to make an impact that matters.
Drive your career: At Deloitte, you are encouraged to take ownership of your career. We recognize there is no one-size-fits-all career path, and global, cross-business mobility and up-/re-skilling are all within the range of possibilities to shape a unique and fulfilling career. Know more about Life at Deloitte.
Everyone's welcome… entrust your happiness to us: Our workspaces and initiatives are geared towards your 360-degree happiness. This includes specific needs you may have in terms of accessibility, flexibility, safety and security, and caregiving. Here's a glimpse of things that are in store for you.

Interview tips
We want job seekers exploring opportunities at Deloitte to feel prepared, confident, and comfortable. To help you with your interview, we suggest that you do your research and know some background about the organization and the business area you're applying to. Check out recruiting tips from Deloitte professionals.
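To make the GCP skills this listing names concrete, here is a minimal, hypothetical Spark-on-Scala sketch of a BigQuery read-transform-write job. It assumes the open-source spark-bigquery-connector is on the classpath; the project, dataset, table, and bucket names are invented placeholders, not details from the posting.

```scala
import org.apache.spark.sql.SparkSession

// Illustrative only: a daily aggregation over a BigQuery table.
object BigQueryEtlSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("bigquery-etl-sketch")
      .getOrCreate()

    // Read via the spark-bigquery-connector (must be on the classpath).
    val events = spark.read
      .format("bigquery")
      .option("table", "my_project.analytics.events") // placeholder table
      .load()

    // A simple transformation: event counts per day and type.
    val daily = events.groupBy("event_date", "event_type").count()

    // Indirect writes stage data through a GCS bucket.
    daily.write
      .format("bigquery")
      .option("table", "my_project.analytics.daily_event_counts") // placeholder
      .option("temporaryGcsBucket", "my-temp-bucket")             // placeholder
      .mode("overwrite")
      .save()

    spark.stop()
  }
}
```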

Posted 6 days ago

Apply

15.0 years

0 Lacs

Delhi, India

Remote

Educational Qualifications: BE/B Tech/M.E/M

Purpose: To lead the operations of UIDAI's critical infrastructure, primarily hosted on an OpenStack on-premise private cloud architecture, ensuring 24/7 availability of Aadhaar services. Manage a team of experts to design application deployment architecture that ensures high availability, and to provide infra-deployment guidelines to bake into app design. Ensure robust security, scalability, and reliability of UIDAI's data centres and networks. Participate in architectural design review sessions, develop proofs of concept/pilots, implement projects, and deliver ongoing upgrades and enhancements. Revamp applications for Aadhaar's private cloud deployment in today's constantly shifting digital landscape to increase operational efficiency and reduce infrastructure costs.

Role & Responsibilities

Innovation & Technology Transformation:
- Align with the Vision, Mission, and Core Values of UIDAI while working closely with inter-disciplinary teams.
- Lead the Cloud Operations/Infra team in fine-tuning & optimization of cloud-native platforms to improve performance and achieve cost efficiency.
- Drive solution design for RFPs, POCs, and pilots for new and upcoming projects or R&D initiatives, using open-source cloud and infrastructure to build a scalable and elastic data center.
- Encourage & create an environment for knowledge sharing within and outside UIDAI; interact/partner with leading institutes, R&D establishments, and educational institutions to stay up to date with new technologies and trends in cloud computing.
- Be a thought leader in the architecture design and development of complex operational data analytics solutions to monitor various metrics related to infrastructure and applications.

Architecture Design & Development:
- Lead the design, implementation, and deployment of OpenStack-based on-premise private cloud infrastructure.
- Develop scalable, secure, and highly available cloud architectures to meet business and operational needs.
- Architect and design infrastructure solutions that support both virtualized and containerized workloads.

Solution Integration, Performance Monitoring & Optimization:
- Integrate OpenStack with existing on-premise data centre systems, network infrastructure, and storage platforms.
- Work with cross-functional teams to ensure seamless integration of cloud solutions in UIDAI.
- Monitor cloud infrastructure performance and ensure efficient use of resources; identify areas for improvement and implement optimizations to reduce costs and improve performance.

Security & Compliance:
- Implement security best practices for on-premise cloud environments, ensuring data protection and compliance with industry standards.
- Regularly perform security audits and vulnerability assessments to maintain a secure cloud environment.

Collaboration & Documentation:
- Collaborate with internal teams (app development and security) to align cloud infrastructure with UIDAI's requirements and objectives, and manage seamless communication within tech teams and across the organization.
- Maintain detailed live documentation of cloud architecture, processes, and configurations to establish trails of decision-making and ensure transparency and accountability.

Role Requirements:
- More than 15 years of experience in technical, infra, and app solutioning, with at least 7+ years spearheading large multi-disciplinary technology teams across various domains in a leadership position.
- Excellent problem-solving and troubleshooting skills; must have demonstrable experience in application performance analysis through low-level debugging.
- Experience with transformation projects for on-premise data solutions and open-source CMPs (OpenStack, CloudStack).
- Well versed in Site Reliability Engineering (SRE) concepts with a focus on extreme automation & infrastructure-as-code (IaC) methodologies, having led such teams before, including experience with GitOps and platform automation tools like Terraform, Ansible, etc.
- Strong knowledge of Linux-based operating systems (Ubuntu, CentOS, RedHat, etc.).
- Strong understanding of how HTTP/1.1 and HTTP/2 function, including gRPC over HTTP/2 and QUIC.
- Experience in system administration, server storage, networking, virtualization, data warehouse, data integration, data migration, and business intelligence/artificial intelligence solutions on the cloud.
- Proficient in technology administration, remote infrastructure management, cloud assessment, QA, monitoring, and DevOps practices.
- Extensive experience in cloud platform architecture, private cloud deployment, and large-scale transformation or migration of applications to cloud-native platforms.
- Experience building cloud-native platforms on Kubernetes, including awareness & experience of service mesh, cloud-native storage, integration with SAN & NAS, Kubernetes operators, CNI, CSI, CRI, etc.
- Strong networking background covering routing, switching, BGP, and technologies like TRILL, MP-BGP, EVPN, etc. Preferably, experience with SAN networking and Linux networking concepts like network namespaces, route tables, and the ss utility.
- Experience with cloud and on-premise databases like Cloud SQL, Cloud Spanner, Bigtable, RDS, Aurora, DynamoDB, Oracle, Teradata, MySQL, DB2, and SQL Server. Exposure to NoSQL databases like MongoDB, CouchDB, Cassandra, graph databases, etc.
- Experience with MLOps pipelines is preferable.
- Experience with distributed computing platforms and enterprise environments like Hadoop and GCP/AWS/Azure cloud is preferred.
- Experience with various data integration and ETL technologies on the cloud, like Spark, PySpark/Scala, and Dataflow, is preferred. (ref:iimjobs.com)

Posted 6 days ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: Senior Security Engineer (Application & Cloud Security)
Location: Chennai

About Tazapay
Tazapay is a cross-border payment service provider. They offer local collections via local payment methods, virtual accounts, and cards in over 70 markets. The merchant does not need to create local entities anywhere, and Tazapay offers the additional compliance framework to take care of local regulations and requirements. This results in decreased transaction costs, FX transparency, and higher auth rates. They are licensed and backed by leading investors. www.tazapay.com

What's waiting for you?
This is an amazing opportunity to join a fantastic crew before the rocket-ship launch. It will be a story you carry with you through your life: the unique experience of building something from the ground up, and the satisfaction of seeing your product used and paid for by thousands of customers. You will be part of a growth story in securing critical payment infrastructure that spans both application security and cloud security across 70+ markets. We believe in a culture of openness, innovation & great memories together.

About the Senior Security Engineer Role
As a Senior Security Engineer, you will play a pivotal role in securing our entire technology stack, from application-level security to cloud infrastructure protection. You will lead comprehensive security initiatives across our AWS cloud environments and payment applications built with Node.js and GoLang microservices, while leveraging AWS security services and modern security tools to protect against evolving threats. This role combines deep technical expertise in both application security and cloud security with leadership responsibilities.

Key Responsibilities

Application Security Leadership:
- Lead comprehensive security assessments of microservices-based applications built with GoLang, Java, or Scala.
- Conduct advanced security reviews of Vue.js and ReactJS frontend applications and their integration with backend services.
- Execute expert-level manual and automated web application penetration testing using industry-standard methodologies (OWASP Testing Guide, PTES).
- Design and implement vulnerability scoring and risk assessment frameworks using CVSS, OWASP Risk Rating, and custom business impact metrics.
- Utilize govulncheck for Go-specific vulnerability detection and dependency analysis across microservices.
- Deploy Semgrep/OpenGrep for advanced static code analysis and custom security policy enforcement.
- Integrate Gitleaks for comprehensive secret detection across development workflows.
- Lead secure development lifecycle (SDLC) integration and establish security standards for development teams.
- Perform complex web application penetration testing, including authentication bypass, authorization flaws, injection attacks, and business logic vulnerabilities.

AWS Cloud Security Architecture:
- Design and implement enterprise-level security architecture for AWS cloud environments.
- Configure and optimize AWS Shield (Standard and Advanced) for comprehensive DDoS protection.
- Implement and manage AWS CloudFront security configurations, including advanced WAF rules, SSL/TLS, and origin protection.
- Secure complex AWS services including EC2, ECS, EKS, Lambda, RDS, S3, API Gateway, and multi-region deployments.
- Design network security controls using VPC, Security Groups, NACLs, AWS Transit Gateway, and PrivateLink.
- Establish and lead secure CI/CD pipeline implementations for Node.js applications and GoLang microservices.
- Architect container security solutions for Docker and Kubernetes (EKS) environments.

Security Automation & Monitoring:
- Implement comprehensive security monitoring using AWS CloudTrail, GuardDuty, and Security Hub.
- Deploy and manage Prowler for continuous AWS security assessments and compliance validation.
- Utilize ScoutSuite for multi-cloud security posture management and configuration auditing.
- Configure Gitleaks for continuous secret monitoring across enterprise development workflows.
- Implement Semgrep/OpenGrep rules for real-time security vulnerability detection and policy enforcement.
- Lead automation initiatives using Infrastructure as Code (Terraform, CloudFormation, AWS CDK).
- Develop advanced security automation scripts and frameworks using Python, Bash, and AWS SDKs.
- Create comprehensive security dashboards and executive reporting mechanisms.

Vulnerability Management & Risk Assessment:
- Lead enterprise vulnerability management programs with comprehensive scoring using CVSS v3.1, OWASP Risk Rating, and custom business impact assessments.
- Develop sophisticated risk-scoring matrices incorporating technical severity, business impact, exploitability, and regulatory requirements (a toy scoring sketch follows this listing).
- Create detailed penetration testing reports with executive summaries, technical findings, and strategic remediation roadmaps.
- Establish vulnerability SLA metrics and track remediation timelines based on risk scores and business priorities.
- Conduct root cause analysis (RCA) on complex security incidents and implement preventive measures.
- Lead threat modeling sessions and strategic risk assessments for new features and infrastructure changes.
- Mentor junior security engineers and provide technical guidance on vulnerability remediation.

Compliance & Regulatory Security:
- Ensure comprehensive compliance with financial industry regulations (PCI DSS, SOX, GDPR, PSD2).
- Lead compliance audits and regulatory assessments using Prowler for AWS compliance validation.
- Implement ScoutSuite for comprehensive multi-cloud security auditing.
- Design and maintain data protection controls for sensitive payment processing workloads.
- Develop and maintain disaster recovery and business continuity security plans.
- Lead security aspects of vendor risk assessments and third-party integrations.
- Represent security requirements to business leadership and regulatory bodies.

Technical Leadership & Strategy:
- Serve as technical security leader for complex cross-functional projects.
- Influence security strategies, standards, and architectural decisions across the organization.
- Lead security initiatives and mentor junior engineers on advanced security practices.
- Participate in strategic security planning and technology evaluation.
- Drive security culture transformation and champion security best practices.
- Represent security needs in executive leadership and board-level communications.

Required Qualifications:
- 8+ years of experience in information security with demonstrated expertise in both application security and cloud security.
- Extensive experience securing microservices architectures, particularly those built with GoLang, Java, or Scala.
- Advanced experience with AWS cloud security, including Shield, CloudFront, and comprehensive security service management.
- Expert-level web application penetration testing experience, including complex business logic vulnerabilities and multi-tier architectures.
- Proven leadership in vulnerability scoring and risk assessment using industry-standard frameworks.
- Hands-on expertise with security automation tools: govulncheck, Gitleaks, Semgrep/OpenGrep, Prowler, ScoutSuite.
- Strong experience securing Node.js applications and modern JavaScript frameworks (Vue.js, ReactJS).
- Experience leading security teams and influencing organizational security strategy.

Technical Skills:
- Expert-level proficiency in AWS security services including Shield, CloudFront, GuardDuty, Security Hub, WAF, and the broader service portfolio.
- Advanced application security expertise across GoLang, Java, Scala, Node.js, Vue.js, and ReactJS technologies.
- Mastery of security automation tools: govulncheck (Go vulnerability scanning), Gitleaks (secret detection), Semgrep/OpenGrep (static analysis), Prowler (AWS security assessment), ScoutSuite (multi-cloud auditing).
- Expert-level web application penetration testing skills using advanced tools and custom exploitation frameworks.
- Comprehensive knowledge of vulnerability scoring frameworks, including CVSS v3.1, OWASP Risk Rating, and the FAIR methodology.
- Advanced Infrastructure as Code proficiency (Terraform, CloudFormation, AWS CDK).
- Expert container and orchestration security (Docker, Kubernetes/EKS, service mesh security).
- Advanced scripting and automation capabilities (Python, Bash, PowerShell, Go).
- Enterprise network security and cloud networking expertise.

Security Expertise:
- Deep understanding of application security principles and advanced penetration testing methodologies.
- Expert knowledge of cloud security frameworks (NIST, CSA, AWS Well-Architected Security Pillar).
- Advanced understanding of financial services security and payment processing compliance requirements.
- Expertise in security architecture design for complex distributed systems.
- Advanced threat modeling and risk assessment capabilities.
- Comprehensive knowledge of cryptography, PKI, and secure communication protocols.
- Expert-level incident response and forensic analysis skills.
- Advanced understanding of regulatory compliance frameworks and audit requirements.

Certifications:
- AWS Security Specialty certification (required).
- Nice to have: advanced penetration testing certifications (OSCP, GWEB, eWPT, eWPTX).
- Nice to have: security leadership certifications (CISSP, CISM).
- Nice to have: cloud architecture certifications (AWS Solutions Architect Professional, DevOps Engineer Professional).
- Nice to have: additional cloud security certifications (Azure Security, GCP Security).

Additional Skills:
- Experience with multi-cloud security architectures and hybrid environments.
- Advanced knowledge of serverless security (AWS Lambda, API Gateway, serverless frameworks).
- Expertise in security orchestration and automated response (SOAR) platforms.
- Experience with machine learning/AI security applications and threat detection.
- Advanced understanding of payment processing security and financial services infrastructure.
- Experience with regulatory examination processes and security audit leadership.
- Knowledge of emerging security technologies and threat landscape evolution.
- Experience with security product evaluation and vendor management.
- Advanced presentation and executive communication skills.

Key Abilities and Traits:
- Technical Excellence: demonstrated ability to architect and implement comprehensive security solutions across complex application and cloud environments processing sensitive financial data.
- Leadership: proven capability to lead security initiatives across multiple teams, influence strategic decisions, and mentor engineering talent while representing security needs to executive leadership.
- Strategic Thinking: ability to balance immediate security needs with long-term strategic objectives, translating business requirements into technical security solutions.
- Problem-Solving: expert-level analytical and problem-solving skills with the ability to address complex security challenges spanning application code to cloud infrastructure.
- Communication: exceptional verbal and written communication skills, capable of explaining complex security concepts to technical teams, business stakeholders, and executive leadership.
- Continuous Innovation: commitment to staying current with emerging security threats, technologies, and industry best practices while driving security innovation within the organization.
- Project Management: advanced ability to manage multiple complex security initiatives simultaneously while ensuring compliance with regulatory requirements and business objectives.
- Mentorship: strong commitment to developing junior security talent and fostering a security-conscious culture across engineering teams.

Join our team and let's groove together to the rhythm of innovation and opportunity!
Your Buddy,
Tazapay
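A toy illustration of the custom risk-scoring idea this listing describes: the sketch below combines a CVSS-like technical severity with a business-impact weight and maps the result to a remediation SLA. All weights, thresholds, and the sample finding are invented for illustration, not Tazapay's actual framework.

```scala
// Simplified risk scoring (not CVSS itself): severity scaled by business impact.
object RiskScoreSketch {
  final case class Finding(name: String, severity: Double, businessImpact: Double)

  // severity in [0, 10] (CVSS-like), businessImpact in [0, 1]; weights are invented.
  def score(f: Finding): Double =
    math.min(10.0, f.severity * (0.5 + 0.5 * f.businessImpact))

  // Hypothetical SLA ladder keyed on the combined score.
  def slaDays(score: Double): Int =
    if (score >= 9.0) 7 else if (score >= 7.0) 30 else if (score >= 4.0) 90 else 180

  def main(args: Array[String]): Unit = {
    val f = Finding("IDOR on payout endpoint", severity = 8.1, businessImpact = 0.9)
    val s = score(f)
    println(f"${f.name}: score=$s%.1f, remediation SLA=${slaDays(s)} days")
  }
}
```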

Posted 6 days ago

Apply

4.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

We are seeking a highly skilled and hands-on Data Engineer to join Controls Technology to support the design, development, and implementation of our next-generation Data Mesh and Hybrid Cloud architecture. This role is critical in building scalable, resilient, and future-proof data pipelines and infrastructure that enable the seamless integration of Controls Technology data within a unified platform. The Data Engineer will work closely with the Data Mesh and Cloud Architect Lead to implement data products, ETL/ELT pipelines, hybrid cloud integrations, and governance frameworks that support data-driven decision-making across the enterprise.

Key Responsibilities:

Data Pipeline Development:
- Design, build, and optimize ETL/ELT pipelines for structured and unstructured data.
- Develop real-time and batch data ingestion pipelines using distributed data processing frameworks.
- Ensure pipelines are highly performant, cost-efficient, and secure.

Apache Iceberg & Starburst Integration:
- Work extensively with Apache Iceberg for data lake storage optimization and schema evolution (a short sketch follows this listing).
- Manage Iceberg catalogs and ensure seamless integration with query engines.
- Configure and maintain Hive MetaStore (HMS) for Iceberg-backed tables and ensure proper metadata management.
- Utilize Starburst and Stargate to enable distributed SQL-based analytics and seamless data federation.
- Optimize performance tuning for large-scale querying and federated access to structured and semi-structured data.

Data Mesh Implementation:
- Implement Data Mesh principles by developing domain-specific data products that are discoverable, interoperable, and governed.
- Collaborate with data domain owners to enable self-service data access while ensuring consistency and quality.

Hybrid Cloud Data Integration:
- Develop and manage data storage, processing, and retrieval solutions across AWS and on-premise environments.
- Work with cloud-native tools such as AWS S3, RDS, Lambda, Glue, Redshift, and Athena to support scalable data architectures.
- Ensure hybrid cloud data flows are optimized, secure, and compliant with organizational standards.

Data Governance & Security:
- Implement data governance, lineage tracking, and metadata management solutions.
- Enforce security best practices for data encryption, role-based access control (RBAC), and compliance with policies such as GDPR and CCPA.

Performance Optimization & Monitoring:
- Monitor and optimize data workflows, query performance, and resource utilization.
- Implement logging, alerting, and monitoring solutions using CloudWatch, Prometheus, or Grafana to ensure system health.

Collaboration & Documentation:
- Work closely with data architects, application teams, and business units to ensure seamless integration of data solutions.
- Maintain clear documentation of data models, transformations, and architecture for internal reference and governance.

Required Technical Skills:
- Programming & Scripting: strong proficiency in Python, SQL, and shell scripting; experience with Scala or Java is a plus.
- Data Processing & Storage: hands-on experience with Apache Spark, Kafka, Flink, or similar distributed processing frameworks; strong knowledge of relational (PostgreSQL, MySQL, Oracle) and NoSQL (DynamoDB, MongoDB) databases; expertise in Apache Iceberg for managing large-scale data lakes, schema evolution, and ACID transactions; experience working with Iceberg catalogs, Hive MetaStore (HMS), and integrating Iceberg-backed tables with query engines; familiarity with Starburst and Stargate for federated querying and cross-platform data access.
- Cloud & Hybrid Architecture: experience working with AWS data services (S3, Redshift, Glue, Athena, EMR, RDS); understanding of hybrid data storage and integration between on-prem and cloud environments.
- Infrastructure as Code (IaC) & DevOps: experience with Terraform, AWS CloudFormation, or Kubernetes for provisioning infrastructure; CI/CD pipeline experience using GitHub Actions, Jenkins, or GitLab CI/CD.
- Data Governance & Security: familiarity with data cataloging, lineage tracking, and metadata management; understanding of RBAC, IAM roles, encryption, and compliance frameworks (GDPR, SOC 2, etc.).

Required Soft Skills:
- Problem-Solving & Analytical Thinking: ability to troubleshoot complex data issues and optimize workflows.
- Collaboration & Communication: comfortable working with cross-functional teams and articulating technical concepts to non-technical stakeholders.
- Ownership & Proactiveness: self-driven, detail-oriented, and able to take ownership of tasks with minimal supervision.
- Continuous Learning: eager to explore new technologies, improve skill sets, and stay ahead of industry trends.

Qualifications:
- 4-6 years of experience in data engineering, cloud infrastructure, or distributed data processing.
- Bachelor's or Master's degree in Computer Science, Data Engineering, Information Technology, or a related field.
- Hands-on experience with data pipelines, cloud services, and large-scale data platforms.
- Strong foundation in SQL, Python, Apache Iceberg, Starburst, cloud-based data solutions (AWS preferred), and Apache Airflow orchestration.

------------------------------------------------------
Job Family Group: Technology
------------------------------------------------------
Job Family: Data Architecture
------------------------------------------------------
Time Type: Full time
------------------------------------------------------
Most Relevant Skills: Please see the requirements listed above.
------------------------------------------------------
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.
------------------------------------------------------
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
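To ground the Apache Iceberg and Hive MetaStore skills above, here is a minimal Spark-on-Scala sketch of an Iceberg-backed table with schema evolution. It assumes the iceberg-spark-runtime package is available and a Hive MetaStore is reachable; the catalog, schema, table, and column names are invented placeholders.

```scala
import org.apache.spark.sql.SparkSession

// Illustrative only: create, evolve, and append to an Iceberg table.
object IcebergSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("iceberg-sketch")
      // Register an Iceberg catalog backed by the Hive MetaStore.
      .config("spark.sql.catalog.hms", "org.apache.iceberg.spark.SparkCatalog")
      .config("spark.sql.catalog.hms.type", "hive")
      .getOrCreate()

    spark.sql(
      """CREATE TABLE IF NOT EXISTS hms.risk.trades (
        |  trade_id BIGINT, desk STRING, notional DOUBLE, trade_date DATE
        |) USING iceberg PARTITIONED BY (trade_date)""".stripMargin)

    // Schema evolution without rewriting existing data files.
    spark.sql("ALTER TABLE hms.risk.trades ADD COLUMN currency STRING")

    import spark.implicits._
    Seq((1L, "rates", 1e6, java.sql.Date.valueOf("2025-01-15"), "USD"))
      .toDF("trade_id", "desk", "notional", "trade_date", "currency")
      .writeTo("hms.risk.trades") // DataFrameWriterV2 API (Spark 3+)
      .append()

    spark.stop()
  }
}
```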

Posted 6 days ago

Apply

3.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Description
Please note: even though the GPP mentions Remote, this is a hybrid role.

Key Responsibilities:
- Implement and automate deployment of distributed systems for ingesting and transforming data from various sources (relational, event-based, unstructured).
- Continuously monitor and troubleshoot data quality and integrity issues.
- Implement data governance processes and methods for managing metadata, access, and retention for internal and external users.
- Develop reliable, efficient, scalable, and quality data pipelines with monitoring and alert mechanisms using ETL/ELT tools or scripting languages.
- Develop physical data models and implement data storage architectures per design guidelines.
- Analyze complex data elements and systems, data flow, dependencies, and relationships to contribute to conceptual, physical, and logical data models.
- Participate in testing and troubleshooting of data pipelines.
- Develop and operate large-scale data storage and processing solutions using distributed and cloud-based platforms (e.g., Data Lakes, Hadoop, HBase, Cassandra, MongoDB, Accumulo, DynamoDB).
- Use agile development technologies, such as DevOps, Scrum, Kanban, and continuous improvement cycles, for data-driven applications.

Qualifications:
- College, university, or equivalent degree in a relevant technical discipline, or relevant equivalent experience.
- This position may require licensing for compliance with export controls or sanctions regulations.

Competencies:
- System Requirements Engineering: translate stakeholder needs into verifiable requirements and establish acceptance criteria.
- Collaborates: build partnerships and work collaboratively with others to meet shared objectives.
- Communicates Effectively: develop and deliver multi-mode communications that convey a clear understanding of the unique needs of different audiences.
- Customer Focus: build strong customer relationships and deliver customer-centric solutions.
- Decision Quality: make good and timely decisions that keep the organization moving forward.
- Data Extraction: perform ETL activities from various sources and transform the data for consumption by downstream applications and users.
- Programming: create, write, and test computer code, test scripts, and build scripts using industry standards and tools.
- Quality Assurance Metrics: apply measurement science to assess whether a solution meets its intended outcomes.
- Solution Documentation: document information and solutions based on knowledge gained during product development activities.
- Solution Validation Testing: validate configuration item changes or solutions using best practices.
- Data Quality: identify, understand, and correct flaws in data to support effective information governance.
- Problem Solving: solve problems using systematic analysis processes and industry-standard methodologies.
- Values Differences: recognize the value that different perspectives and cultures bring to an organization.

Skills and Experience Needed:
Must-Have:
- 3-5 years of experience in data engineering with a strong background in Azure Databricks and Scala/Python.
- Hands-on experience with Spark (Scala/PySpark) and SQL.
- Experience with Spark Streaming, Spark internals, and query optimization (a streaming sketch follows this listing).
- Proficiency in Azure cloud services.
- Agile development experience.
- Unit testing of ETL.
- Experience creating ETL pipelines with ML model integration.
- Knowledge of big data storage strategies (optimization and performance).
- Critical problem-solving skills.
- Basic understanding of data models (SQL/NoSQL), including Delta Lake or Lakehouse.
- Quick learner.
Nice-to-Have:
- Understanding of the ML lifecycle.
- Exposure to open-source big data technologies.
- Experience with Spark, Scala/Java, MapReduce, Hive, HBase, and Kafka.
- SQL query language proficiency.
- Experience with clustered compute cloud-based implementations.
- Familiarity with developing applications requiring large file movement in a cloud-based environment.
- Exposure to agile software development.
- Experience building analytical solutions.
- Exposure to IoT technology.

Work Schedule: Most of the work will be with stakeholders in the US, with an overlap of 2-3 hours during EST hours on a need basis.

Job: Systems/Information Technology
Organization: Cummins Inc.
Role Category: Remote
Job Type: Exempt - Experienced
ReqID: 2409179
Relocation Package: Yes
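A hedged sketch of the Spark Structured Streaming work the must-have list mentions: a Kafka-to-Delta ingestion job in Scala, of the kind commonly run on Azure Databricks. The broker address, topic, JSON fields, and storage paths are all invented placeholders.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Illustrative only: stream JSON events from Kafka into a Delta table.
object StreamingSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("streaming-sketch").getOrCreate()
    import spark.implicits._

    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092") // placeholder broker
      .option("subscribe", "telemetry")                 // placeholder topic
      .load()

    // Parse the Kafka value payload into typed columns.
    val parsed = raw
      .selectExpr("CAST(value AS STRING) AS json", "timestamp")
      .select(
        get_json_object($"json", "$.device_id").as("device_id"),
        get_json_object($"json", "$.reading").cast("double").as("reading"),
        $"timestamp")

    // Append to Delta with a checkpoint for exactly-once sink semantics.
    parsed.writeStream
      .format("delta")
      .option("checkpointLocation", "/mnt/checkpoints/telemetry") // placeholder
      .outputMode("append")
      .start("/mnt/delta/telemetry")                              // placeholder
      .awaitTermination()
  }
}
```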

Posted 6 days ago

Apply

6.0 years

0 Lacs

India

Remote

Porch Group is a leading vertical software and insurance platform positioned to be the best partner to help homebuyers move, maintain, and fully protect their homes. We offer differentiated products and services, with homeowners insurance at the center of this relationship. We differentiate and look to win in the massive and growing homeowners insurance opportunity by providing the best services for homebuyers, led by advantaged underwriting in insurance to protect the whole home.

As a leader in the home services software-as-a-service ("SaaS") space, we've built deep relationships with approximately 30 thousand companies that are key to the home-buying transaction, such as home inspectors, mortgage companies, and title companies. In 2020, Porch Group rang the Nasdaq bell and began trading under the ticker symbol PRCH. We are looking to build a truly great company and are JUST GETTING STARTED.

Job Title: Senior Software Engineer I
Location: India
Workplace Type: Remote

Job Summary
The future is bright for the Porch Group, and we'd love for you to be a part of it as our Senior Software Engineer. The ideal candidate will have a strong background in software development and a passion for solving complex problems, ideally supporting call center applications.

What You'll Do as a Senior Software Engineer:
- Design and Development: lead the design, development, and implementation of high-quality software solutions, communicating technical decisions through design documentation, across two or three software teams.
- Technical Leadership: provide technical guidance and mentorship to junior engineers, ensuring best practices in software development.
- Code Review: conduct code reviews to maintain code quality and consistency.
- Collaboration: work closely with cross-functional teams including product managers, designers, and QA engineers to deliver robust software solutions and critical features supporting our contact center applications and related technologies.
- Problem Solving: analyze and resolve complex technical issues in a timely manner.
- Documentation: create and maintain comprehensive technical documentation.
- Innovation: stay updated with the latest industry trends and technologies to ensure our solutions remain cutting-edge.
- Agile Practices: participate in agile development processes, including sprint planning, daily stand-ups, and retrospectives.

What You Will Bring as a Senior Software Engineer:
- Bachelor's or master's degree in computer science, engineering, or a related field (or four years of equivalent work experience).
- 6+ years of software development experience.
- Experience with JavaScript and its frameworks (React, Vue, Angular, etc.).
- Proficiency with SQL, preferably PostgreSQL, including PostgreSQL expertise.
- Production JVM language experience, preferably Scala.
- Experience with development tools such as Git, Jenkins, Docker, etc.
- Knowledge of cloud platforms (e.g., AWS, Azure, Google Cloud) is a plus.
- Experience with relational and non-relational databases; working knowledge of commercial CRM systems and integrations.
- Excellent communication skills, problem-solving abilities, and a collaborative mindset.
- Continuous delivery and integration experience.
- Experience with test-driven development (TDD) and automated testing frameworks (see the sketch after this listing).
- Experience working with real-time systems with hundreds of concurrent users, preferably call center applications.
- Proficiency with version control systems.
- Production Kubernetes experience.
- Proven experience working with US-based business teams.
- Excellent written and verbal communication skills in English.
- Ability to work within core US business hours / time zone expectations, with Eastern Standard Time (EST) overlap.
- Workspace: a quiet space to work and an internet connection of at least 30 Mbps download | 10 Mbps upload.

The application window for this position is anticipated to close in 2 weeks (10 business days) from May 16, 2025. Please know this may change based on business and interviewing needs.

What You Will Get as a Porch Group Team Member
Our benefits package will provide you with comprehensive coverage for your health, life, and financial well-being. Our benefits include medical insurance, accident insurance, and retiral benefits. Our wellness programs include 12 company-paid holidays, 2 flexible holidays, privilege/earned leave, casual/sick leave, paid maternity and paternity leaves, and weekly wellness events.

What's next?
Submit your application below and our Talent Acquisition team will review your application shortly! If your resume gets us intrigued, we will look to connect with you for a chat to learn more about your background, and then possibly invite you to have virtual interviews. What's important to call out is that we want to make sure not only that you're the right person for us, but also that we're the right next step for you, so come prepared with all the questions you have!

Porch is committed to building an inclusive culture of belonging that not only embraces the diversity of our people but also reflects the diversity of the communities in which we work and the customers we serve. We know that the happiest and highest-performing teams include people with diverse perspectives that encourage new ways of solving problems, so we strive to attract and develop talent from all backgrounds and create workplaces where everyone feels seen, heard, and empowered to bring their full, authentic selves to work.

Porch is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex including sexual orientation and gender identity, national origin, disability, protected veteran status, or any other characteristic protected by applicable laws, regulations, and ordinances.
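To illustrate the TDD bullet above, here is a toy ScalaTest sketch. The call-routing domain logic is invented to echo the contact-center context of the listing; it is not Porch's actual code.

```scala
import org.scalatest.funsuite.AnyFunSuite

// Hypothetical domain: route a call to the longest-idle agent with the skill.
final case class Agent(name: String, idleSeconds: Int, skills: Set[String])

object Router {
  def route(agents: Seq[Agent], requiredSkill: String): Option[Agent] =
    agents
      .filter(_.skills.contains(requiredSkill))
      .sortBy(-_.idleSeconds) // longest idle first
      .headOption
}

// Tests written first, TDD-style, pinning down the routing contract.
class RouterSpec extends AnyFunSuite {
  test("routes to the longest-idle agent with the required skill") {
    val agents = Seq(
      Agent("a", idleSeconds = 30, skills = Set("billing")),
      Agent("b", idleSeconds = 90, skills = Set("billing", "claims")),
      Agent("c", idleSeconds = 120, skills = Set("claims")))
    assert(Router.route(agents, "billing").map(_.name).contains("b"))
  }

  test("returns None when no agent has the skill") {
    assert(Router.route(Nil, "billing").isEmpty)
  }
}
```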

Posted 6 days ago

Apply

15.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About Us:
MUFG Bank, Ltd. is Japan's premier bank, with a global network spanning more than 40 markets. Outside of Japan, the bank offers an extensive scope of commercial and investment banking products and services to businesses, governments, and individuals worldwide. MUFG Bank's parent, Mitsubishi UFJ Financial Group, Inc. (MUFG), is one of the world's leading financial groups. Headquartered in Tokyo and with over 360 years of history, the Group has about 120,000 employees and offers services including commercial banking, trust banking, securities, credit cards, consumer finance, asset management, and leasing. The Group aims to be the world's most trusted financial group through close collaboration among our operating companies, flexibly responding to all the financial needs of our customers, serving society, and fostering shared and sustainable growth for a better world. MUFG's shares trade on the Tokyo, Nagoya, and New York stock exchanges.

MUFG Global Service Private Limited:
Established in 2020, MUFG Global Service Private Limited (MGS) is a 100% subsidiary of MUFG with offices in Bengaluru and Mumbai. MGS India has been set up as a Global Capability Centre / Centre of Excellence to provide support services across various functions such as IT, KYC/AML, Credit, Operations, etc. to MUFG Bank offices globally. MGS India plans to significantly ramp up its growth over the next 18-24 months while servicing MUFG's global network across the Americas, EMEA, and Asia Pacific.

About the Role:
Position Title: GFCD Data Analytics & Transaction Monitoring Tuning/Optimization, VP
Corporate Title: Vice President
Reporting to: Director - Global Transaction Monitoring
Location: Bangalore

Purpose of Role:
We are seeking a highly skilled and data-driven Senior Financial Crime Analytics Team Lead to join our Global Financial Crimes Division (GFCD) team. In this role, you will lead a team that primarily uses Actimize and Databricks to conduct advanced analytics that enhance our transaction monitoring (TM) and customer risk rating (CRR) capabilities. You will work at the intersection of data science, compliance, and technology to advise on, build, and configure scalable solutions that protect the organization from financial crime.

Main Responsibilities:
- Strategize the team's initiatives and determine measurable outcomes with GFCD management that align the MGS team's goals with broader GFCD global program goals on an annual basis.
- Manage the team's progress against project/initiative timelines and goals agreed upon with GFCD management.
- Coordinate with global stakeholders to conduct analytics supporting all regional financial crimes offices (Americas, EMEA, APAC, Japan).
- Oversee the design and implementation of TM and CRR tuning methodologies, including what-if scenario analysis, threshold optimization, and ATL/BTL sampling (a simplified sketch follows this listing).
- Lead the end-user team in using the full suite of Databricks capabilities to support GFCD's goals related to analytics, tuning, and optimization.
- Supervise exploratory data analysis (EDA) and communicate insights to stakeholders to support decision-making.
- Oversee and guide the team to analyze complex datasets, identify new methods to detect anomalies, and assist with developing and executing a strategy to apply machine learning techniques for financial crime detection.
- Guide the development of sustainable data pipelines and robust ETL processes using Python, R, Scala, and SQL.
- Build and maintain utilities that support TM optimization.
- Ensure compliance with technical standards, data integrity, and security policies.
- Collaborate with centralized reporting, data governance, and operational teams to ensure alignment and efficiency.

Skills and knowledge:
- Transaction Monitoring (Actimize): experienced with Actimize for monitoring and analyzing transactions to identify suspicious activities and red flags indicative of money laundering, terrorism financing, and other financial crimes.
- Strong technical skills: expertise in Python, Scala, and SQL, with familiarity with rules-based and machine learning models and model governance, ideally those relevant to transaction monitoring and sanctions screening.
- Proficiency in Databricks and Apache Spark: skilled in developing scalable data pipelines and performing complex data analysis using Databricks and Apache Spark, with experience in Delta Lake for efficient data storage and real-time data streaming applications.
- Relevant certifications: Databricks Certified Data Analyst Associate, Databricks Certified Machine Learning Associate, Databricks Certified Data Engineer Associate.
- Experience with transaction monitoring, sanctions screening, and financial crimes data sources.
- Excellent communication and presentation skills, with the ability to convey complex data insights to non-technical stakeholders.

Additional skills:
- Experience interfacing with banking regulators and enforcement staff.
- Thorough understanding of an effective financial crimes risk management framework.
- Demonstrated ability to manage multiple projects simultaneously.
- Ability to interact effectively at all levels of the organization, including Bank staff, management, directors, and prudential regulators.
- Ability to work autonomously and to initiate and prioritize own work.
- Ability to work with teams of project managers.
- Solid judgment, strong negotiating skills, and a practical approach to implementation, including knowledge of Bank systems.
- Ability to balance regulatory requirements with the best interests of the Bank and its customers.
- Ability to prepare analytical reports and visual representations of information.
- Ability to apply mathematical principles or statistical approaches where needed to solve problems.

Education & professional qualifications:
- Bachelor's degree in Computer Science, Information Systems, Information Technology, or a related field.

Experience:
- 15+ years of experience in financial crimes data analytics within the financial services industry.

Equal Opportunity Employer:
The MUFG Group is committed to providing equal employment opportunities to all applicants and employees and does not discriminate on the basis of race, colour, national origin, physical appearance, religion, gender expression, gender identity, sex, age, ancestry, marital status, disability, medical condition, sexual orientation, genetic information, or any other protected status of an individual or that individual's associates or relatives, or any other classification protected by the applicable laws.
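A simplified what-if threshold sweep of the kind the tuning responsibilities describe, written as a Spark-on-Scala sketch suitable for a Databricks notebook. The transaction schema, amounts, and candidate thresholds are invented; real ATL/BTL analysis would also sample and disposition alerts around each threshold.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}

// Illustrative only: project alert volumes for candidate rule thresholds.
object ThresholdTuningSketch {
  def sweep(txns: DataFrame, thresholds: Seq[Double]): DataFrame = {
    val spark = txns.sparkSession
    import spark.implicits._
    // For each candidate threshold, count transactions that would alert.
    thresholds
      .map(t => (t, txns.filter($"amount" >= t).count()))
      .toDF("threshold", "projected_alerts")
  }

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("tm-tuning-sketch").getOrCreate()
    import spark.implicits._

    // Toy transaction data standing in for a monitored population.
    val txns = Seq((1L, 9500.0), (2L, 12000.0), (3L, 50500.0))
      .toDF("txn_id", "amount")

    sweep(txns, Seq(5000.0, 10000.0, 50000.0)).show()
    spark.stop()
  }
}
```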

Posted 6 days ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About the Job
The Director, Data Engineering will lead the development and implementation of a comprehensive data strategy that aligns with the organization's business goals and enables data-driven decision-making.

Roles and Responsibilities:
- Build and manage a team of talented data managers and engineers with the ability to not only keep up with, but also pioneer in, this space.
- Collaborate with and influence leadership to directly impact company strategy and direction.
- Develop new techniques and data pipelines that will enable various insights for internal and external customers.
- Develop deep partnerships with client implementation teams, engineering, and product teams to deliver on major cross-functional measurements and testing.
- Communicate effectively to all levels of the organization, including executives.
- Partner successfully with teams of dramatically varying backgrounds, from the highly technical to the highly creative.
- Design a data engineering roadmap and execute the vision behind it.
- Hire, lead, and mentor a world-class data team.
- Partner with other business areas to co-author and co-drive strategies on our shared roadmap.
- Oversee the movement of large amounts of data into our data lake.
- Establish a customer-centric approach and synthesize customer needs.
- Own end-to-end pipelines and destinations for the transfer and storage of all data.
- Manage 3rd-party resources and critical data integration vendors.
- Promote a culture that drives autonomy, responsibility, perfection, and mastery.
- Maintain and optimize software and cloud expenses to meet the financial goals of the company.
- Provide technical leadership to the team in the design and architecture of data products, and drive change across process, practices, and technology within the organization.
- Work with engineering managers and functional leads to set direction and ambitious goals for the Engineering department.
- Ensure data quality, security, and accessibility across the organization.

Skills You Will Need:
- 10+ years of experience in data engineering.
- 5+ years of experience leading data teams of 30+ resources, including selecting talent and planning/allocating resources across multiple geographies and functions.
- 5+ years of experience with GCP tools and technologies, specifically Google BigQuery, Google Cloud Composer, Dataflow, Dataform, etc.
- Experience creating large-scale data engineering pipelines, data-based decision-making, and quantitative analysis tools and software.
- Hands-on experience with version control systems (Git).
- Experience with CI/CD, data architectures, pipelines, quality, and code management.
- Experience with complex, high-volume, multi-dimensional data based on unstructured, structured, and streaming datasets.
- Experience with SQL and NoSQL databases.
- Experience creating, testing, and supporting production software and systems.
- Proven track record of identifying and resolving performance bottlenecks in production systems.
- Experience designing and developing data lake, data warehouse, ETL, and task-orchestration systems.
- Strong leadership, communication, time management, and interpersonal skills.
- Proven architectural skills in data engineering.
- Experience leading teams developing production-grade data pipelines on large datasets.
- Experience designing a large data lake and lakehouse, managing data flows that integrate information from various sources into a common pool, and implementing data pipelines based on the ETL model.
- Experience with common data languages (e.g., Python, Scala) and data warehouses (e.g., Redshift, BigQuery, Snowflake, Databricks).
- Extensive experience with cloud tools and technologies, GCP preferred.
- Experience managing real-time data pipelines.
- Successful track record and demonstrated thought leadership, cross-functional influence, and partnership within an agile/waterfall development environment.
- Experience in regulated industries or with compliance frameworks (e.g., SOC 2, ISO 27001).

Nice to have:
- HR services industry experience.
- Experience in data science, including predictive modeling.
- Experience leading teams across multiple geographies.

Posted 6 days ago

Apply

5.0 - 10.0 years

11 - 15 Lacs

Pune

Work from Office

Project description
You'll be working in the GM Business Analytics team located in Pune. The successful candidate will be a member of the global Distribution team, which has team members in London and Pune. We work as part of a global team providing analytical solutions for IB distribution/sales people. Solutions deployed should be extensible globally with minimal localization.

Responsibilities
Are you passionate about data and analytics? Are you keen to be part of the journey to modernize a data warehouse/analytics suite of applications? Do you take pride in the quality of software delivered in each development iteration? We're looking for someone like that to join us and be part of a high-performing team on a high-profile project, where you will:
- solve challenging problems in an elegant way
- master state-of-the-art technologies
- build a highly responsive and fast-updating application in an Agile & Lean environment
- apply best development practices and effectively utilize technologies
- work across the full delivery cycle to ensure high-quality delivery
- write high-quality code and adhere to coding standards
- work collaboratively with diverse team(s) of technologists

You are:
- Curious and collaborative, comfortable working independently as well as in a team
- Focused on delivery to the business
- Strong in analytical skills; for example, you must understand the key dependencies among existing systems in terms of the flow of data among them, and it is essential that you learn to understand the 'big picture' of how the IB industry/business functions
- Able to quickly absorb new terminology and business requirements
- Already strong in analytical tools, technologies, platforms, etc., with a strong desire for learning and self-improvement
- Open to learning home-grown technologies, supporting current-state infrastructure, and helping drive future-state migrations
- Imaginative and creative with newer technologies
- Able to accurately and pragmatically estimate the development effort required for specific objectives

You will have the opportunity to work under minimal supervision to understand local and global system requirements, and to design and implement the required functionality, bug fixes, and enhancements. You will be responsible for components that are developed across the whole team and deployed globally. You will also have the opportunity to provide third-line support to the application's global user community, which will include assisting dedicated support staff and liaising directly with members of other development teams, some local and some remote.

Skills
Must have:
- A bachelor's or master's degree, preferably in Information Technology or a related field (computer science, mathematics, etc.), focusing on data engineering.
- 5+ years of relevant experience as a data engineer in Big Data.
- Strong knowledge of programming languages (Python/Scala) and Big Data technologies (Spark, Databricks, or equivalent).
- Strong experience executing complex data analysis and running complex SQL/Spark queries.
- Strong experience building complex data transformations in SQL/Spark (a small example follows this listing).
- Strong knowledge of database technologies.
- Strong knowledge of Azure Cloud is advantageous.
- Good understanding of and experience with Agile methodologies and delivery.
- Strong communication skills with the ability to build partnerships with stakeholders.
- Strong analytical, data management, and problem-solving skills.
Nice to have:
- Experience working with the QlikView tool.
- Understanding of QlikView scripting and data models.

Other Languages: English C1 Advanced
Seniority: Senior
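A small example of the kind of SQL/Spark transformation the must-have list mentions: a Scala window-function query ranking products per salesperson. The sales-analytics schema is invented for illustration, not the team's actual data model.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

// Illustrative only: top product by notional for each salesperson.
object SalesAnalyticsSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("sales-analytics-sketch").getOrCreate()
    import spark.implicits._

    // Toy data standing in for a distribution/sales fact table.
    val trades = Seq(
      ("alice", "FX", 1.2e6), ("alice", "Rates", 0.4e6),
      ("bob", "FX", 2.5e6), ("bob", "Credit", 0.9e6))
      .toDF("salesperson", "product", "notional")

    // Rank each salesperson's products by notional, keep the top one.
    val byPerson = Window.partitionBy($"salesperson").orderBy($"notional".desc)
    trades
      .withColumn("rank", row_number().over(byPerson))
      .filter($"rank" === 1)
      .show()

    spark.stop()
  }
}
```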

Posted 6 days ago

Apply

5.0 - 9.0 years

9 - 13 Lacs

Gurugram

Work from Office

At Capgemini Invent, we believe difference drives change. As inventive transformation consultants, we blend our strategic, creative, and scientific capabilities, collaborating closely with clients to deliver cutting-edge solutions. Join us to drive transformation tailored to our clients' challenges of today and tomorrow: informed and validated by science and data, superpowered by creativity and design, and all underpinned by technology created with purpose.

Your role
As a Senior Data Scientist, you are expected to develop and implement Artificial Intelligence based solutions across various disciplines for the Intelligent Industry vertical of Capgemini Invent. You are expected to work as an individual contributor or with a team to help design and develop ML/NLP models as per requirements. You will work closely with the Product Owner, Systems Architect, and other key stakeholders from conceptualization through implementation. You should take ownership while understanding the client requirement, the data to be used, the security & privacy needs, and the infrastructure to be used for development and implementation. You will be responsible for executing data science projects independently to deliver business outcomes, and you are expected to demonstrate domain expertise, develop and execute program plans, and proactively solicit feedback from stakeholders to identify improvement actions. This role requires a strong technical background, excellent problem-solving skills, and the ability to work collaboratively with stakeholders from different functional and business teams. The role also requires you to collaborate on ML asset creation and to be eager to learn and deliver trainings to fellow data science professionals. We expect thought leadership, especially in proposing ML/NLP assets to build based on expected industry requirements. Experience building industry-specific (e.g., Manufacturing, R&D, Supply Chain, Life Sciences), production-ready AI models using microservices and web services is a plus.

Tech stack:
- Programming Languages: Python (NumPy, SciPy, Pandas, Matplotlib, Seaborn)
- Databases: RDBMS (MySQL, Oracle, etc.), NoSQL stores (HBase, Cassandra, etc.)
- ML/DL Frameworks: scikit-learn, TensorFlow (Keras), PyTorch; big data ML frameworks - Spark (Spark ML, GraphX), H2O (a small Spark ML sketch follows this listing)
- Cloud: Azure/AWS/GCP

Your profile
- Predictive and prescriptive modelling using statistical and machine learning algorithms, including but not limited to time series, regression, trees, ensembles, and neural nets (deep & shallow: CNN, LSTM, Transformers, etc.).
- Experience with open-source OCR engines like Tesseract, speech recognition, computer vision, face recognition, emotion detection, etc. is a plus.
- Unsupervised learning: market basket analysis, collaborative filtering, dimensionality reduction, and a good understanding of common matrix decomposition approaches like SVD.
- Various clustering approaches: hierarchical, centroid-based, density-based, distribution-based, and graph-based clustering like spectral.
- NLP: information extraction, similarity matching, sentiment analysis, text clustering, semantic analysis, document summarization, context mapping/understanding, intent classification, word embeddings, vector space models; experience with libraries like NLTK, spaCy, and Stanford CoreNLP is a plus.
- Usage of Transformers for NLP, experience with LLMs like ChatGPT and Llama, usage of RAG (with frameworks like LangChain and LangGraph and vector stores), and building agentic AI applications.
- Model deployment: ML pipeline formation, data security and scrutiny checks, and MLOps for productionizing a built model on-premises and on the cloud.

Required qualifications
- Master's degree in a quantitative field such as Mathematics, Statistics, Machine Learning, Computer Science, or Engineering, or a bachelor's degree with relevant experience.
- Good experience programming in languages such as Python, Java, or Scala, plus SQL, and experience with data visualization tools like Tableau or Power BI.

Preferred experience
- Experienced in the Agile way of working; managing team effort and tracking through JIRA.
- Experience in proposal, RFP, RFQ, and pitch creation and delivery to large forums.
- Experience in POC, MVP, and PoV asset creation with innovative use cases.
- Experience working in a consulting environment is highly desirable.
- High-impact client communication.

The job may also entail sitting as well as working at a computer for extended periods of time. Candidates should be able to communicate effectively by telephone, email, and face to face.

What you will love about working here
We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.

Posted 6 days ago

Apply

5.0 - 10.0 years

14 - 17 Lacs

Navi Mumbai

Work from Office


As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating source-to-target pipelines/workflows and implementing solutions that tackle the client's needs. Your primary responsibilities include: Design, build, optimize and support new and existing data models and ETL processes based on our clients' business requirements. Build, deploy and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.

Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Must have 5+ years of experience in Big Data - Hadoop, Spark, Scala, Python, HBase, Hive. Good to have: AWS - S3, Athena, DynamoDB, Lambda, Jenkins, Git. Developed Python and PySpark programs for data analysis. Good working experience with Python to develop custom frameworks for generating rules (like a rules engine). Developed Python code to gather data from HBase and designed solutions implemented with PySpark. Used Apache Spark DataFrames/RDDs to apply business transformations and Hive context objects to perform read/write operations. Preferred technical and professional experience: Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages, such as Python, Java and Scala.
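
To make the Spark and Hive expectations concrete, here is a minimal Scala sketch of the pattern this listing describes: read a Hive table, apply a business transformation with the DataFrame API, and write the result back. The database, table and column names are hypothetical, and the modern SparkSession with Hive support stands in for the legacy HiveContext found in older codebases:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object HiveTransform {
  def main(args: Array[String]): Unit = {
    // enableHiveSupport() is the modern replacement for the legacy HiveContext
    val spark = SparkSession.builder()
      .appName("hive-transform")
      .enableHiveSupport()
      .getOrCreate()

    // Read a hypothetical Hive table of orders
    val orders = spark.table("sales_db.orders")

    // Business transformation with the DataFrame API:
    // keep completed orders and aggregate revenue per day
    val daily = orders
      .filter(col("status") === "COMPLETED")
      .groupBy(col("order_date"))
      .agg(sum("amount").as("total_amount"), count("*").as("order_count"))

    // Write the result back to Hive
    daily.write.mode("overwrite").saveAsTable("sales_db.daily_order_summary")
    spark.stop()
  }
}
```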

Posted 6 days ago

Apply

3.0 - 6.0 years

5 - 10 Lacs

Bengaluru

Work from Office


Hands-on experience with testing frameworks for web apps, mobile, web services/APIs, network and blockchain. Experience with both commercial and open-source tools like Burp Suite Professional, Nmap, Kali, Metasploit, etc. Experience with Open Web Application Security Project (OWASP) and Open Source Security Testing Methodology Manual (OSSTMM) methodologies and tools. Experience in preparing a security threat model and associated test plans. Experience in translating complex security threats into simpler procedures so that web application developers, systems administrators, and management can understand security testing results. In-depth knowledge of application development processes and at least one programming or scripting language (e.g., Java, Scala, C#, Ruby, Perl, Python, PowerShell) is preferred. Knowledge of current information security threats. Primary Skills: Certification in CEH (Certified Ethical Hacker); OSCP (Offensive Security Certified Professional) is desirable.

Posted 6 days ago

Apply

2.0 - 5.0 years

14 - 17 Lacs

Mumbai

Work from Office


Experience with Scala object-oriented/functional programming. Strong SQL background. Experience in Spark SQL, Hive and data engineering. SQL experience with data pipelines and data lakes. Strong background in distributed computing. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: SQL experience with data pipelines and data lakes; strong background in distributed computing; experience with Scala object-oriented/functional programming; strong SQL background. Preferred technical and professional experience: Core Scala development experience.
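
A hedged sketch of the Spark SQL plus data lake combination this role asks for: a Parquet dataset at a hypothetical s3a:// path is registered as a temporary view and queried with plain SQL from Scala. All names are illustrative:

```scala
import org.apache.spark.sql.SparkSession

object LakeQuery {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("lake-query").getOrCreate()

    // Register a Parquet dataset from a hypothetical data-lake path as a SQL view
    spark.read.parquet("s3a://datalake/events/").createOrReplaceTempView("events")

    // Express the business logic in Spark SQL
    val topUsers = spark.sql(
      """SELECT user_id, COUNT(*) AS event_count
        |FROM events
        |GROUP BY user_id
        |ORDER BY event_count DESC
        |LIMIT 10""".stripMargin)

    topUsers.show()
    spark.stop()
  }
}
```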

Posted 6 days ago

Apply

4.0 - 9.0 years

12 - 16 Lacs

Kochi

Work from Office


As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the Azure Cloud Data Platform. Responsibilities: Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases. Process the data with Spark, Python, PySpark and Hive, HBase or other NoSQL databases on the Azure Cloud Data Platform or HDFS. Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies built on the platform. Experience in developing streaming pipelines. Experience working with Hadoop/Azure ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka.

Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Minimum 4+ years of experience in big data technologies with extensive data engineering experience in Spark with Python or Scala; minimum 3 years of experience on cloud data platforms on Azure; experience in Databricks / Azure HDInsight / Azure Data Factory, Synapse, SQL Server DB. Good to excellent SQL skills. Exposure to streaming solutions and message brokers like Kafka. Preferred technical and professional experience: Certification in Azure and Databricks, or Cloudera Spark certified developers.
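
As an illustration of the streaming-pipeline requirement, here is a minimal Scala Structured Streaming sketch that consumes a hypothetical Kafka topic and maintains windowed counts. The broker address and topic are placeholders, and the kafka source assumes the spark-sql-kafka connector is on the classpath:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object KafkaStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("kafka-stream").getOrCreate()

    // Subscribe to a hypothetical Kafka topic as a streaming DataFrame
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "orders")
      .load()
      .selectExpr("CAST(value AS STRING) AS payload", "timestamp")

    // Count events per one-minute window, tolerating five minutes of late data
    val counts = events
      .withWatermark("timestamp", "5 minutes")
      .groupBy(window(col("timestamp"), "1 minute"))
      .count()

    // Console sink for demonstration; a real pipeline would write to a table or topic
    counts.writeStream
      .outputMode("update")
      .format("console")
      .start()
      .awaitTermination()
  }
}
```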

Posted 6 days ago

Apply

4.0 - 9.0 years

14 - 18 Lacs

Bengaluru

Work from Office


Job Title - Retail Specialized Data Scientist, Level 9, SnC GN Data & AI. Management Level: 09 - Consultant. Location: Bangalore / Gurgaon / Mumbai / Chennai / Pune / Hyderabad / Kolkata.

Must have skills: A solid understanding of retail industry dynamics, including key performance indicators (KPIs) such as sales trends, customer segmentation, inventory turnover, and promotions. Strong ability to communicate complex data insights to non-technical stakeholders, including senior management, marketing, and operational teams. Meticulous in ensuring data quality, accuracy, and consistency when handling large, complex datasets. Gathering and cleaning data from various retail sources, such as sales transactions, customer interactions, inventory management, website traffic, and marketing campaigns. Strong proficiency in Python for data manipulation, statistical analysis, and machine learning (libraries like Pandas, NumPy, Scikit-learn). Expertise in supervised and unsupervised learning algorithms. Use of advanced analytics to optimize pricing strategies based on market demand, competitor pricing, and customer price sensitivity.

Good to have skills: Familiarity with big data processing platforms like Apache Spark, Hadoop, or cloud-based platforms such as AWS or Google Cloud for large-scale data processing. Experience with ETL (Extract, Transform, Load) processes and tools like Apache Airflow to automate data workflows. Familiarity with designing scalable and efficient data pipelines and architecture. Experience with tools like Tableau, Power BI, Matplotlib, and Seaborn to create meaningful visualizations that present data insights clearly.

Job Summary: The Retail Specialized Data Scientist will play a pivotal role in utilizing advanced analytics, machine learning, and statistical modeling techniques to help our retail business make data-driven decisions. This individual will work closely with teams across marketing, product management, supply chain, and customer insights to drive business strategies and innovations. The ideal candidate should have experience in retail analytics and the ability to translate data into actionable insights.

Roles & Responsibilities: Leverage retail knowledge: utilize your deep understanding of the retail industry (merchandising, customer behavior, product lifecycle) to design AI solutions that address critical retail business needs. Gather and clean data from various retail sources, such as sales transactions, customer interactions, inventory management, website traffic, and marketing campaigns. Apply machine learning algorithms, such as classification, clustering, regression, and deep learning, to enhance predictive models (a minimal Scala segmentation sketch follows this listing). Use AI-driven techniques for personalization, demand forecasting, and fraud detection. Use advanced statistical methods to help optimize existing use cases and build new products that serve new challenges and use cases. Stay updated on the latest trends in data science and retail technology. Collaborate with executives, product managers, and marketing teams to translate insights into business actions.

Professional & Technical Skills: Strong analytical and statistical skills. Expertise in machine learning and AI. Experience with retail-specific datasets and KPIs. Proficiency in data visualization and reporting tools. Ability to work with large datasets and complex data structures. Strong communication skills to interact with both technical and non-technical stakeholders. A solid understanding of the retail business and consumer behavior.

Programming Languages: Python, R, SQL, Scala. Data Analysis Tools: Pandas, NumPy, Scikit-learn, TensorFlow, Keras. Visualization Tools: Tableau, Power BI, Matplotlib, Seaborn. Big Data Technologies: Hadoop, Spark, AWS, Google Cloud. Databases: SQL, NoSQL (MongoDB, Cassandra).

Additional Information: Qualification Experience: Minimum 3 year(s) of experience is required. Educational Qualification: Bachelor's or Master's degree in Data Science, Statistics, Computer Science, Mathematics, or a related field.
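
The listing is Python-first, but the same techniques carry over to Spark MLlib in Scala, which is the focus of this page. Below is a minimal customer-segmentation sketch, assuming a hypothetical feature table with RFM-style columns; every path and column name is illustrative:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.ml.clustering.KMeans
import org.apache.spark.ml.feature.VectorAssembler

object CustomerSegments {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("customer-segments").getOrCreate()

    // Hypothetical per-customer features: recency, frequency, monetary value
    val customers = spark.read.parquet("s3a://retail/customer_features/")

    // Assemble the numeric columns into the single vector column MLlib expects
    val assembler = new VectorAssembler()
      .setInputCols(Array("recency_days", "order_frequency", "monetary_value"))
      .setOutputCol("features")
    val featured = assembler.transform(customers)

    // Unsupervised segmentation into five clusters
    val model = new KMeans().setK(5).setSeed(42L).fit(featured)

    // Each customer gets a cluster id in the "prediction" column
    model.transform(featured).select("customer_id", "prediction").show(10)
    spark.stop()
  }
}
```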

Posted 6 days ago

Apply

2.0 - 3.0 years

5 - 9 Lacs

Kochi

Work from Office


Job Title - + + Management Level: Location: Kochi, Coimbatore, Trivandrum. Must have skills: Python/Scala, Pyspark/Pytorch. Good to have skills: Redshift. Experience: 3.5-5 years of experience is required. Educational Qualification: Graduation.

Job Summary: You'll capture user requirements and translate them into business and digitally enabled solutions across a range of industries. Roles and Responsibilities: Designing, developing, optimizing, and maintaining data pipelines that adhere to ETL principles and business goals. Solving complex data problems to deliver insights that help the business achieve its goals. Sourcing data (structured and unstructured) from various touchpoints, and formatting and organizing it into an analyzable format. Creating data products for analytics team members to improve productivity. Calling AI services like vision, translation etc. to generate an outcome that can be used in further steps along the pipeline. Fostering a culture of sharing, re-use, design and operational efficiency of data and analytical solutions. Preparing data to create a unified database and building tracking solutions ensuring data quality. Creating production-grade analytical assets deployed using the guiding principles of CI/CD.

Professional and Technical Skills: Expert in Python, Scala, PySpark, PyTorch, JavaScript (at least any 2). Extensive experience in data analysis (big data - Apache Spark environments), data libraries (e.g. Pandas, SciPy, TensorFlow, Keras etc.), and SQL, with 2-3 years of hands-on experience working on these technologies. Experience in one of the many BI tools such as Tableau, Power BI, Looker. Good working knowledge of key concepts in data analytics, such as dimensional modeling, ETL, reporting/dashboarding, data governance, dealing with structured and unstructured data, and corresponding infrastructure needs. Worked extensively in Microsoft Azure (ADF, Function Apps, ADLS, Azure SQL), AWS (Lambda, Glue, S3), Databricks analytical platforms/tools, and Snowflake Cloud Data Warehouse.

Additional Information: Experience working in cloud data warehouses like Redshift or Synapse. Certification in any one of the following or equivalent: AWS Certified Data Analytics - Specialty; Microsoft Certified Azure Data Scientist Associate; Snowflake SnowPro Core - Data Engineer; Databricks Data Engineering. Qualification Experience: 3.5-5 years of experience is required. Educational Qualification: Graduation.

Posted 6 days ago

Apply


5.0 - 10.0 years

8 - 12 Lacs

Kochi

Work from Office


Job Title - + + Management Level: Location: Kochi, Coimbatore, Trivandrum. Must have skills: Big Data, Python or R. Good to have skills: Scala, SQL.

Job Summary: A Data Scientist is expected to be hands-on and deliver end-to-end projects undertaken in the Analytics space. They must have a proven ability to drive business results with their data-based insights, and must be comfortable working with a wide range of stakeholders and functional teams. The right candidate will have a passion for discovering solutions hidden in large data sets and for working with stakeholders to improve business outcomes.

Roles and Responsibilities: Identify valuable data sources and collection processes. Supervise preprocessing of structured and unstructured data. Analyze large amounts of information to discover trends and patterns for the insurance industry. Build predictive models and machine-learning algorithms. Combine models through ensemble modeling. Present information using data visualization techniques. Collaborate with engineering and product development teams. Hands-on knowledge of implementing various AI algorithms and their best-fit scenarios. Has worked on Generative AI based implementations.

Professional and Technical Skills: 3.5-5 years' experience in Analytics systems/program delivery, with at least 2 Big Data or Advanced Analytics project implementations. Experience using statistical computing languages (R, Python, SQL, PySpark, etc.) to manipulate data and draw insights from large data sets; familiarity with Scala, Java or C++. Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks. Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and proper usage, etc.) and experience with their applications. Hands-on experience in Azure/AWS analytics platforms (3+ years). Experience using variations of Databricks or similar analytical applications in AWS/Azure. Experience using business intelligence tools (e.g. Tableau) and data frameworks (e.g. Hadoop). Strong mathematical skills (e.g. statistics, algebra). Excellent communication and presentation skills. Deploying data pipelines in production based on Continuous Delivery practices.

Additional Information: Multi-industry domain experience. Expert in Python, Scala, SQL. Knowledge of Tableau/Power BI or similar self-service visualization tools. Interpersonal and team skills should be top notch. Leadership experience is nice to have. Qualification Experience: 3.5-5 years of experience is required. Educational Qualification: Graduation.

Posted 6 days ago

Apply

3.0 - 4.0 years

5 - 9 Lacs

Kochi

Work from Office


Job Title - + + Management Level: Location: Kochi, Coimbatore, Trivandrum. Must have skills: Python, Pyspark. Good to have skills: Redshift.

Job Summary: We are seeking a highly skilled and experienced Senior Data Engineer to join our growing Data and Analytics team. The ideal candidate will have deep expertise in Databricks and cloud data warehousing, with a proven track record of designing and building scalable data pipelines, optimizing data architectures, and enabling robust analytics capabilities. This role involves working collaboratively with cross-functional teams to ensure the organization leverages data as a strategic asset.

Roles & Responsibilities: Design, build, and maintain scalable data pipelines and ETL processes using Databricks and other modern tools. Architect, implement, and manage cloud-based data warehousing solutions on Databricks (Lakehouse architecture). Develop and maintain optimized data lake architectures to support advanced analytics and machine learning use cases (see the Delta Lake sketch after this listing). Collaborate with stakeholders to gather requirements, design solutions, and ensure high-quality data delivery. Optimize data pipelines for performance and cost efficiency. Implement and enforce best practices for data governance, access control, security, and compliance in the cloud. Monitor and troubleshoot data pipelines to ensure reliability and accuracy. Lead and mentor junior engineers, fostering a culture of continuous learning and innovation. Excellent communication skills. Ability to work independently and alongside clients based out of Western Europe.

Professional & Technical Skills: Designing, developing, optimizing, and maintaining data pipelines that adhere to ETL principles and business goals. Solving complex data problems to deliver insights that help the business achieve its goals. Sourcing data (structured and unstructured) from various touchpoints, and formatting and organizing it into an analyzable format. Creating data products for analytics team members to improve productivity. Calling AI services like vision, translation etc. to generate an outcome that can be used in further steps along the pipeline. Fostering a culture of sharing, re-use, design and operational efficiency of data and analytical solutions. Preparing data to create a unified database and building tracking solutions ensuring data quality. Creating production-grade analytical assets deployed using the guiding principles of CI/CD. Expert in Python, Scala, PySpark, PyTorch, JavaScript (at least any 2). Extensive experience in data analysis (big data - Apache Spark environments), data libraries (e.g. Pandas, SciPy, TensorFlow, Keras etc.), and SQL, with 3-4 years of hands-on experience working on these technologies. Experience in one of the many BI tools such as Tableau, Power BI, Looker. Good working knowledge of key concepts in data analytics, such as dimensional modeling, ETL, reporting/dashboarding, data governance, dealing with structured and unstructured data, and corresponding infrastructure needs. Worked extensively in Microsoft Azure (ADF, Function Apps, ADLS, Azure SQL), AWS (Lambda, Glue, S3), Databricks analytical platforms/tools, and Snowflake Cloud Data Warehouse.

Additional Information: Experience working in cloud data warehouses like Redshift or Synapse. Certification in any one of the following or equivalent: AWS Certified Data Analytics - Specialty; Microsoft Certified Azure Data Scientist Associate; Snowflake SnowPro Core - Data Engineer; Databricks Data Engineering. Qualification Experience: 5-8 years of experience is required. Educational Qualification: Graduation.
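
To ground the Lakehouse requirement, here is a hedged Scala sketch of a raw-to-curated hop on Delta Lake. The abfss:// paths and column names are hypothetical, and the delta format assumes the Delta Lake library is available (Databricks clusters ship with it):

```scala
import org.apache.spark.sql.SparkSession

object DeltaLakehouse {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("delta-demo").getOrCreate()

    // Hypothetical raw zone: JSON landed by an ingestion job
    val raw = spark.read.json("abfss://raw@lake.dfs.core.windows.net/orders/")

    // Deduplicate and persist to a curated Delta table (medallion-style flow)
    raw.dropDuplicates("order_id")
      .write
      .format("delta")
      .mode("overwrite")
      .save("abfss://curated@lake.dfs.core.windows.net/orders_delta/")

    // Delta tables read back like any other Spark source
    spark.read.format("delta")
      .load("abfss://curated@lake.dfs.core.windows.net/orders_delta/")
      .show(5)
    spark.stop()
  }
}
```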

Posted 6 days ago

Apply

15.0 - 20.0 years

4 - 8 Lacs

Bengaluru

Work from Office


Project Role: Data Engineer. Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems. Must have skills: Neo4j, Stardog. Good to have skills: Java. Minimum 5 year(s) of experience is required. Educational Qualification: 15 years full time education.

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable enough to meet the demands of the organization.

Roles & Responsibilities: Expected to be an SME. Collaborate with and manage the team to perform. Responsible for team decisions. Engage with multiple teams and contribute to key decisions. Provide solutions to problems for the immediate team and across multiple teams. Mentor junior team members to enhance their skills and knowledge in data engineering. Continuously evaluate and improve data processes to enhance efficiency and effectiveness.

Professional & Technical Skills: Must-have skills: proficiency in Neo4j. Good-to-have skills: experience with Java. Strong understanding of data modeling and graph database concepts. Experience with data integration tools and ETL processes. Familiarity with data quality frameworks and best practices. Proficient in programming languages such as Python or Scala for data manipulation.

Additional Information: The candidate should have a minimum of 5 years of experience in Neo4j. This position is based at our Bengaluru office. A 15 years full time education is required. Qualification: 15 years full time education.
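
Since the must-have skill here is Neo4j, a brief sketch may help: the official Neo4j Java driver is callable directly from Scala. The connection details, labels and properties below are hypothetical:

```scala
import org.neo4j.driver.{AuthTokens, GraphDatabase, Values}

object GraphUpsert {
  def main(args: Array[String]): Unit = {
    // Hypothetical connection details for a local Neo4j instance
    val driver = GraphDatabase.driver("bolt://localhost:7687",
      AuthTokens.basic("neo4j", "password"))
    val session = driver.session()
    try {
      // Idempotent upsert of two nodes and a relationship via parameterized Cypher
      session.run(
        """MERGE (c:Customer {id: $id})
          |MERGE (p:Product {sku: $sku})
          |MERGE (c)-[:PURCHASED]->(p)""".stripMargin,
        Values.parameters("id", "c-42", "sku", "sku-001"))
    } finally {
      session.close()
      driver.close()
    }
  }
}
```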

Posted 6 days ago

Apply

3.0 - 8.0 years

10 - 14 Lacs

Bengaluru

Work from Office


Project Role: Application Lead. Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact. Must have skills: PySpark. Good to have skills: NA. Minimum 3 year(s) of experience is required. Educational Qualification: 15 years full time education.

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will be responsible for overseeing the application development process and ensuring successful project delivery.

Roles & Responsibilities: Expected to perform independently and become an SME. Active participation/contribution in team discussions is required. Contribute to providing solutions to work-related problems. Lead the design and development of applications. Act as the primary point of contact for application-related queries. Collaborate with team members to ensure project success. Provide technical guidance and mentorship to junior team members. Stay updated on industry trends and best practices.

Professional & Technical Skills: Must-have skills: proficiency in PySpark. Strong understanding of big data processing and analytics. Experience with data processing frameworks like Apache Spark. Hands-on experience in building scalable data pipelines. Knowledge of cloud platforms for data processing. Experience in performance tuning and optimization.

Additional Information: The candidate should have a minimum of 3 years of experience in PySpark. This position is based at our Bengaluru office. A 15 years full-time education is required. Qualification: 15 years full time education.

Posted 6 days ago

Apply

15.0 - 25.0 years

10 - 14 Lacs

Gurugram

Work from Office


Project Role: Application Lead. Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact. Must have skills: Apache Spark. Good to have skills: PySpark, Python (Programming Language). Minimum 5 year(s) of experience is required. Educational Qualification: 15 years full time education.

Summary: As an Application Lead, you will be responsible for designing, building, and configuring applications. Acting as the primary point of contact, you will lead the development team, oversee the delivery process, and ensure successful project execution.

Roles & Responsibilities: Act as a Subject Matter Expert (SME) in application development. Lead and manage a development team to achieve performance goals. Make key technical and architectural decisions. Collaborate with cross-functional teams and stakeholders. Provide technical solutions to complex problems across multiple teams. Oversee the complete application development lifecycle. Gather and analyze requirements in coordination with stakeholders. Ensure timely and high-quality delivery of projects.

Professional & Technical Skills: Must-have skills: proficiency in Apache Spark; strong understanding of big data processing; experience with data streaming technologies; hands-on experience in building scalable, high-performance applications; knowledge of cloud computing platforms. Must-have additional skills: PySpark, Spark SQL / SQL, AWS.

Additional Information: This is a full-time, on-site role based in Gurugram. Candidates must have a minimum of 5 years of hands-on experience with Apache Spark. A minimum of 15 years of full-time formal education is mandatory. Qualification: 15 years full time education.

Posted 6 days ago

Apply

3.0 - 8.0 years

4 - 8 Lacs

Pune

Work from Office


Project Role: Data Engineer. Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems. Must have skills: Databricks Unified Data Analytics Platform. Good to have skills: NA. Minimum 3 year(s) of experience is required. Educational Qualification: 15 years full time education.

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and contribute to the overall data strategy of the organization, ensuring that data solutions are efficient, scalable, and aligned with business objectives.

Roles & Responsibilities: Expected to perform independently and become an SME. Active participation/contribution in team discussions is required. Contribute to providing solutions to work-related problems. Collaborate with stakeholders to gather and analyze data requirements. Design and implement robust data pipelines to support data processing and analytics.

Professional & Technical Skills: Must-have skills: proficiency in Databricks Unified Data Analytics Platform. Strong understanding of data modeling and database design principles. Experience with ETL tools and data integration techniques. Familiarity with cloud platforms and services related to data storage and processing. Knowledge of programming languages such as Python or Scala for data manipulation.

Additional Information: The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform. This position is based at our Pune office. A 15 years full time education is required. Qualification: 15 years full time education.

Posted 6 days ago

Apply

12.0 - 15.0 years

4 - 8 Lacs

Bengaluru

Work from Office


Project Role: Data Engineer. Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems. Must have skills: Data Engineering. Good to have skills: Java Enterprise Edition. Minimum 12 year(s) of experience is required. Educational Qualification: 15 years full time education.

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable enough to meet the demands of the organization.

Roles & Responsibilities: Expected to be an SME. Collaborate with and manage the team to perform. Responsible for team decisions. Engage with multiple teams and contribute to key decisions. Expected to provide solutions to problems that apply across multiple teams. Mentor junior team members to enhance their skills and knowledge in data engineering. Continuously evaluate and improve data processes to enhance efficiency and effectiveness.

Professional & Technical Skills: Must-have skills: proficiency in Data Engineering. Strong understanding of data modeling and database design principles. Experience with ETL tools and frameworks. Familiarity with cloud platforms such as AWS, Azure, or Google Cloud. Knowledge of data warehousing concepts and technologies.

Additional Information: The candidate should have a minimum of 12 years of experience in Data Engineering. This position is based at our Bengaluru office. A 15 years full time education is required. Qualification: 15 years full time education.

Posted 6 days ago

Apply

2.0 - 3.0 years

4 - 8 Lacs

Kochi

Work from Office


Job Title - Data Engineer Sr. Analyst, ACS SONG. Management Level: Level 10 - Sr. Analyst. Location: Kochi, Coimbatore, Trivandrum. Must have skills: Python/Scala, Pyspark/Pytorch. Good to have skills: Redshift.

Job Summary: You'll capture user requirements and translate them into business and digitally enabled solutions across a range of industries. Roles and Responsibilities: Designing, developing, optimizing, and maintaining data pipelines that adhere to ETL principles and business goals. Solving complex data problems to deliver insights that help the business achieve its goals. Sourcing data (structured and unstructured) from various touchpoints, and formatting and organizing it into an analyzable format. Creating data products for analytics team members to improve productivity. Calling AI services like vision, translation etc. to generate an outcome that can be used in further steps along the pipeline. Fostering a culture of sharing, re-use, design and operational efficiency of data and analytical solutions. Preparing data to create a unified database and building tracking solutions ensuring data quality. Creating production-grade analytical assets deployed using the guiding principles of CI/CD.

Professional and Technical Skills: Expert in Python, Scala, PySpark, PyTorch, JavaScript (at least any 2). Extensive experience in data analysis (big data - Apache Spark environments), data libraries (e.g. Pandas, SciPy, TensorFlow, Keras etc.), and SQL, with 2-3 years of hands-on experience working on these technologies. Experience in one of the many BI tools such as Tableau, Power BI, Looker. Good working knowledge of key concepts in data analytics, such as dimensional modeling, ETL, reporting/dashboarding, data governance, dealing with structured and unstructured data, and corresponding infrastructure needs. Worked extensively in Microsoft Azure (ADF, Function Apps, ADLS, Azure SQL), AWS (Lambda, Glue, S3), Databricks analytical platforms/tools, and Snowflake Cloud Data Warehouse.

Additional Information: Experience working in cloud data warehouses like Redshift or Synapse. Certification in any one of the following or equivalent: AWS Certified Data Analytics - Specialty; Microsoft Certified Azure Data Scientist Associate; Snowflake SnowPro Core - Data Engineer; Databricks Data Engineering. About Our Company | Accenture. Qualification Experience: 3.5-5 years of experience is required. Educational Qualification: Graduation.

Posted 6 days ago

Apply

Exploring Scala Jobs in India

Scala is a popular programming language that is widely used in India, especially in the tech industry. Job seekers looking for opportunities in Scala can find a variety of roles across different cities in the country. In this article, we will dive into the Scala job market in India and provide valuable insights for job seekers.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Chennai
  5. Mumbai

These cities are known for their thriving tech ecosystem and have a high demand for Scala professionals.

Average Salary Range

The salary range for Scala professionals in India varies based on experience levels. Entry-level Scala developers can expect to earn around INR 6-8 lakhs per annum, while experienced professionals with 5+ years of experience can earn upwards of INR 15 lakhs per annum.

Career Path

In the Scala job market, a typical career path may look like: Junior Developer → Scala Developer → Senior Developer → Tech Lead.

As professionals gain more experience and expertise in Scala, they can progress to higher roles with increased responsibilities.

Related Skills

In addition to Scala expertise, employers often look for candidates with the following skills: - Java - Spark - Akka - Play Framework - Functional programming concepts

Having a good understanding of these related skills can enhance a candidate's profile and increase their chances of landing a Scala job.
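
As a quick taste of one of these related skills, here is a minimal Akka Typed actor in Scala. All names are illustrative, and the sketch assumes the akka-actor-typed library is on the classpath:

```scala
import akka.actor.typed.{ActorSystem, Behavior}
import akka.actor.typed.scaladsl.Behaviors

object HelloAkka {
  // The single message type this actor understands
  final case class Greet(name: String)

  // A minimal typed actor behavior: log the greeting, keep the same behavior
  def greeter: Behavior[Greet] = Behaviors.receive { (context, message) =>
    context.log.info("Hello, {}!", message.name)
    Behaviors.same
  }

  def main(args: Array[String]): Unit = {
    val system = ActorSystem(greeter, "greeter-system")
    system ! Greet("Scala")   // messages are sent with the ! operator
    system.terminate()
  }
}
```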

Interview Questions

Here are 25 interview questions that you may encounter when applying for Scala roles; a short code sketch after the list illustrates several of the basics:

  • What is Scala and why is it used? (basic)
  • Explain the difference between val and var in Scala. (basic)
  • What is pattern matching in Scala? (medium)
  • What are higher-order functions in Scala? (medium)
  • How does Scala support functional programming? (medium)
  • What is a case class in Scala? (basic)
  • Explain the concept of currying in Scala. (advanced)
  • What is the difference between map and flatMap in Scala? (medium)
  • How does Scala handle null values? (medium)
  • What is a trait in Scala and how is it different from an abstract class? (medium)
  • Explain the concept of implicits in Scala. (advanced)
  • What is the Akka toolkit and how is it used in Scala? (medium)
  • How does Scala handle concurrency? (advanced)
  • Explain the concept of lazy evaluation in Scala. (advanced)
  • What is the difference between List and Seq in Scala? (medium)
  • How does Scala handle exceptions? (medium)
  • What are Futures in Scala and how are they used for asynchronous programming? (advanced)
  • Explain the concept of type inference in Scala. (medium)
  • What is the difference between object and class in Scala? (basic)
  • How can you create a Singleton object in Scala? (basic)
  • What is a higher-kinded type in Scala? (advanced)
  • Explain the concept of for-comprehensions in Scala. (medium)
  • How does Scala support immutability? (medium)
  • What are the advantages of using Scala over Java? (basic)
  • How do you implement pattern matching in Scala? (medium)
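
To ground a few of the basic questions above, here is a small self-contained Scala sketch covering val versus var, case classes, pattern matching, and a higher-order function:

```scala
object InterviewBasics extends App {
  // val is immutable, var is mutable
  val greeting = "hello"   // cannot be reassigned
  var counter = 0          // can be reassigned
  counter += 1

  // Case classes get equality, toString and pattern-matching support for free
  sealed trait Shape
  case class Circle(radius: Double) extends Shape
  case class Rect(width: Double, height: Double) extends Shape

  // Pattern matching over the sealed hierarchy; the compiler checks exhaustiveness
  def area(shape: Shape): Double = shape match {
    case Circle(r)  => math.Pi * r * r
    case Rect(w, h) => w * h
  }

  // A higher-order function: takes another function as an argument
  def applyTwice(f: Int => Int, x: Int): Int = f(f(x))

  println(greeting + ", " + area(Circle(1.0)))   // hello, 3.141592653589793
  println(applyTwice(_ + 3, counter))            // (1 + 3) + 3 = 7
}
```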

Closing Remark

As you explore Scala jobs in India, remember to showcase your expertise in Scala and related skills during interviews. Prepare well, stay confident, and you'll be on your way to a successful career in Scala. Good luck!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
