
6093 Scala Jobs - Page 21

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

The Role
The Data Engineer is accountable for developing high-quality data products to support the Bank's regulatory requirements and data-driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team.

Responsibilities
- Developing and supporting scalable, extensible, and highly available data solutions
- Delivering on critical business priorities while ensuring alignment with the wider architectural vision
- Identifying and helping address potential risks in the data supply chain
- Following and contributing to technical standards
- Designing and developing analytical data models

Required Qualifications & Work Experience
- First Class Degree in Engineering/Technology (4-year graduate course)
- 4-6 years' experience implementing data-intensive solutions using agile methodologies
- Experience with relational databases and using SQL for data querying, transformation, and manipulation
- Experience modelling data for analytical consumers
- Ability to automate and streamline the build, test, and deployment of data pipelines
- Experience with cloud-native technologies and patterns
- A passion for learning new technologies and a desire for personal growth, through self-study, formal classes, or on-the-job training
- Excellent communication and problem-solving skills

Technical Skills (Must Have)
- ETL: Hands-on experience building data pipelines. Proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend, and Informatica
- Big Data: Experience with 'big data' platforms such as Hadoop, Hive, or Snowflake for data storage and processing
- Data Warehousing & Database Management: Understanding of data warehousing concepts; relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design
- Data Modeling & Design: Good exposure to data modeling techniques; design, optimization, and maintenance of data models and data structures
- Languages: Proficient in one or more programming languages commonly used in data engineering, such as Python, Java, or Scala
- DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management

Technical Skills (Valuable)
- Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler, Conduct>IT, Control>Center, Continuous>Flows
- Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc. Demonstrable understanding of underlying architectures and trade-offs
- Data Quality & Controls: Exposure to data validation, cleansing, enrichment, and data controls
- Containerization: Fair understanding of containerization platforms like Docker and Kubernetes
- File Formats: Exposure to event/file/table formats such as Avro, Parquet, Protobuf, Iceberg, and Delta
- Others: Basics of job schedulers like Autosys; basics of entitlement management

Certification on any of the above topics would be an advantage.
------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Digital Software Engineering ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Most Relevant Skills Please see the requirements listed above. ------------------------------------------------------ Other Relevant Skills For complementary skills, please see above and/or contact the recruiter. ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a member of the Google Cloud Consulting Professional Services team, you will have the opportunity to contribute to the success of businesses by guiding them through their cloud journey and leveraging Google's global network, data centers, and software infrastructure. Your role will involve assisting customers in transforming their businesses by utilizing technology to connect with customers, employees, and partners.

Your responsibilities will include interacting with stakeholders to understand customer requirements and providing recommendations for solution architectures. You will collaborate with technical leads and partners to lead migration and modernization projects to Google Cloud Platform (GCP). Additionally, you will design, build, and operationalize data storage and processing infrastructure using cloud-native products, ensuring data quality and governance procedures are in place to maintain accuracy and reliability.

In this role, you will work on data migrations, modernization projects, and the design of data processing systems optimized for scaling. You will troubleshoot platform/product tests, understand data governance and security controls, and travel to customer sites to deploy solutions and conduct workshops to educate and empower customers.

Furthermore, you will be responsible for translating project requirements into goals and objectives, creating work breakdown structures to manage internal and external stakeholders effectively. You will collaborate with Product Management and Product Engineering teams to drive excellence in products and contribute to the digital transformation of organizations across various industries. By joining this team, you will play a crucial role in shaping the future of businesses of all sizes and assisting them in leveraging Google Cloud to accelerate their digital transformation journey.

Posted 1 week ago

Apply

5.0 - 10.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As a Data Engineer at CLOUDSUFI, a Google Cloud Premier Partner, you will be responsible for designing, developing, and deploying graph database solutions using Neo4j for economic data analysis and modeling. Your expertise in graph database architecture, data pipeline development, and production system deployment will play a crucial role in this position.

Your key responsibilities will include designing and implementing Neo4j graph database schemas for complex economic datasets, developing efficient graph data models, creating and optimizing Cypher queries (see the short Scala sketch below), building graph-based data pipelines for real-time and batch processing, architecting scalable data ingestion frameworks, developing ETL/ELT processes, implementing data validation and monitoring systems, and building APIs and services for graph data access and manipulation.

In addition, you will be involved in deploying and maintaining Neo4j clusters in production environments, implementing backup and disaster recovery solutions, monitoring database performance, optimizing queries, managing capacity planning, and establishing CI/CD pipelines for graph database deployments. You will also collaborate with economists and analysts to translate business requirements into graph solutions.

To be successful in this role, you should have 5-10 years of experience, with a background in BTech / BE / MCA / MSc Computer Science. You should have expertise in Neo4j database development, graph modeling, Cypher Query Language, programming languages such as Python, Java, or Scala, data pipeline tools like Apache Kafka and Apache Spark, and cloud platforms like AWS, GCP, or Azure with containerization. Experience with graph database administration, performance tuning, distributed systems, database clustering, data warehousing concepts, and dimensional modeling will be beneficial. Knowledge of financial datasets, market data, economic indicators, data governance, and compliance in financial services is also desired.

Preferred qualifications include Neo4j Certification, a Master's degree in Computer Science, Economics, or a related field, 5+ years of industry experience in financial services or economic research, and additional skills in machine learning on graphs, network analysis, and time-series analysis.

You will work in a technical environment that includes Neo4j Enterprise Edition with APOC procedures, Apache Kafka, Apache Spark, Docker, Kubernetes, Git, Jenkins/GitLab CI, and monitoring tools like Prometheus, Grafana, and the ELK stack. Your application should include a portfolio demonstrating Neo4j graph database projects, examples of production graph systems you've built, and experience with economic or financial data modeling.
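For illustration, a minimal Scala sketch of the kind of graph access this role involves, using the Neo4j Java driver (the connection URI, credentials, and the Country/Indicator model are hypothetical assumptions, not details from the posting):

```scala
import org.neo4j.driver.{AuthTokens, GraphDatabase, Values}

object GraphDemo {
  def main(args: Array[String]): Unit = {
    // Placeholder connection details - replace with a real cluster endpoint
    val driver = GraphDatabase.driver("bolt://localhost:7687",
      AuthTokens.basic("neo4j", "password"))
    val session = driver.session()
    try {
      // Upsert a tiny, hypothetical economic-data model: a country reporting an indicator
      session.run(
        """MERGE (c:Country {code: $code})
          |MERGE (i:Indicator {name: $name})
          |MERGE (c)-[:REPORTS]->(i)""".stripMargin,
        Values.parameters("code", "IN", "name", "GDP"))

      // Cypher query reading the relationship back
      val result = session.run(
        "MATCH (c:Country)-[:REPORTS]->(i:Indicator) RETURN c.code AS code, i.name AS name")
      result.forEachRemaining(r =>
        println(s"${r.get("code").asString} reports ${r.get("name").asString}"))
    } finally {
      session.close()
      driver.close()
    }
  }
}
```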

Posted 1 week ago

Apply

2.0 - 6.0 years

15 - 30 Lacs

Bengaluru

Work from Office

Be a part of a team that harnesses advanced AI, ML, and big data technologies to develop a cutting-edge healthcare technology platform, delivering innovative business solutions.

Job Title: Data Engineer II / Senior Data Engineer
Job Location: Bengaluru, Pune - India

Job Summary: We are a leading Software as a Service (SaaS) company that specializes in the transformation of data in the US healthcare industry through cutting-edge Artificial Intelligence (AI) solutions. We are looking for software developers who continually strive to advance engineering excellence and technology innovation. The mission is to power the next generation of digital products and services through innovation, collaboration, and transparency. You will be a technology leader and doer who enjoys working in a dynamic, fast-paced environment.

Responsibilities:
- Design, develop, and maintain robust and scalable ETL/ELT pipelines to ingest and transform large datasets from various sources.
- Optimize and manage databases (SQL/NoSQL) to ensure efficient data storage, retrieval, and manipulation for both structured and unstructured data.
- Collaborate with data scientists, analysts, and engineers to integrate data from disparate sources and ensure smooth data flow between systems.
- Implement and maintain data validation and monitoring processes to ensure data accuracy, consistency, and availability.
- Automate repetitive data engineering tasks and optimize data workflows for performance and scalability.
- Work closely with cross-functional teams to understand their data needs and provide solutions that help scale operations.
- Ensure proper documentation of data engineering processes, workflows, and infrastructure for easy maintenance and scalability.
- Design, develop, test, deploy, and maintain large-scale data pipelines using AWS services such as S3, Glue, Lambda, and Step Functions.
- Collaborate with cross-functional teams to gather requirements and design solutions for complex data engineering projects.
- Develop ETL/ELT pipelines using Python scripts and SQL queries to extract insights from structured and unstructured data sources.
- Implement web scraping techniques to collect relevant data from various websites and APIs.
- Ensure high availability of the system by implementing monitoring tools like CloudWatch.

Desired Profile:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 3-5 years of hands-on experience as a Data Engineer or in a related data-driven role.
- Strong experience with ETL tools like Apache Airflow, Talend, or Informatica.
- Expertise in SQL and NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB, Cassandra).
- Strong proficiency in Python, Scala, or Java for data manipulation and pipeline development.
- Experience with cloud-based platforms (AWS, Google Cloud, Azure) and their data services (e.g., S3, Redshift, BigQuery).
- Familiarity with big data processing frameworks such as Hadoop, Spark, or Flink.
- Experience with data warehousing concepts and building data models (e.g., Snowflake, Redshift).
- Understanding of data governance, data security best practices, and data privacy regulations (e.g., GDPR, HIPAA).
- Familiarity with version control systems like Git.

HiLabs is an equal opportunity employer (EOE).
No job applicant or employee shall receive less favorable treatment or be disadvantaged because of their gender, marital or family status, color, race, ethnic origin, religion, disability, or age; nor be subject to less favorable treatment or be disadvantaged on any other basis prohibited by applicable law. HiLabs is proud to be an equal opportunity workplace dedicated to pursuing and hiring a diverse and inclusive workforce to support individual growth and superior business results. Thank you for reviewing this opportunity with HiLabs! If this position appears to be a good fit for your skillset, we welcome your application.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

Wipro Limited is a leading technology services and consulting company dedicated to developing innovative solutions that cater to clients' most intricate digital transformation requirements. With a comprehensive range of capabilities in consulting, design, engineering, and operations, we assist clients in achieving their most ambitious goals and establishing sustainable businesses that are future-ready. Our workforce of over 230,000 employees and business partners spread across 65 countries ensures that we fulfill our commitment to helping customers, colleagues, and communities thrive amidst a constantly changing world.

As a Databricks Developer at Wipro, you will be expected to possess the following essential skills:
- Cloud certification in Azure Data Engineer or a related category
- Proficiency in Azure Data Factory, Azure Databricks Spark (PySpark or Scala), SQL, data ingestion, and curation
- Experience in semantic modelling and optimizing data models to function within Rahona
- Familiarity with Azure data ingestion from on-prem sources such as mainframe, SQL Server, and Oracle
- Proficiency in Sqoop and Hadoop
- Ability to use Microsoft Excel for metadata files containing ingestion requirements
- Any additional certification in Azure/AWS/GCP and hands-on experience in cloud data engineering
- Strong programming skills in Python, Scala, or Java

This position is available in multiple locations, including Pune, Bangalore, Coimbatore, and Chennai. The mandatory skill set required for this role is Databricks - Data Engineering. The ideal candidate should have 5-8 years of experience in the field.

At Wipro, we are building a modern organization committed to digital transformation. We seek individuals who are driven by the concept of reinvention - of themselves, their careers, and their skills. We encourage a culture of continuous evolution within our business and industry, adapting to the changing world around us. Join us in a purpose-driven environment that empowers you to craft your own reinvention. Realize your ambitions at Wipro. Applications from individuals with disabilities are highly encouraged.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a part of ZS, you will have the opportunity to work in a place driven by passion that aims to change lives. ZS is a management consulting and technology firm that is dedicated to enhancing life and its quality. The core strength of ZS lies in its people, who work collectively to develop transformative solutions for patients, caregivers, and consumers worldwide. By adopting a client-first approach, ZS employees bring impactful results to every engagement by partnering closely with clients to design custom solutions and technological products that drive value and yield positive outcomes in key areas of their business.

Your role at ZS will require you to bring inquisitiveness for learning, innovative ideas, courage, and dedication to make a life-changing impact. At ZS, individuals are highly valued, with recognition of both the visible and invisible facets of their identities, personal experiences, and belief systems. These elements shape the uniqueness of each individual and contribute to the diverse tapestry within ZS. ZS acknowledges and celebrates personal interests, identities, and the thirst for knowledge as integral components of success within the organization. Learn more about the diversity, equity, and inclusion initiatives at ZS, along with the networks that support ZS employees in fostering community spaces, accessing necessary resources for growth, and amplifying the messages they are passionate about.

As an Architecture & Engineering Specialist specializing in ML Engineering at ZS's India Capability & Expertise Center (CEC), you will be part of a team that constitutes over 60% of ZS employees across three offices in New Delhi, Pune, and Bengaluru. The CEC plays a pivotal role in collaborating with colleagues from North America, Europe, and East Asia to deliver practical solutions to clients that drive the company's operations. Upholding standards of analytical, operational, and technological excellence, the CEC leverages collective knowledge to enable ZS teams to achieve superior outcomes for clients.

Joining ZS's Scaled AI practice within the Architecture & Engineering Expertise Center will immerse you in a dynamic ecosystem focused on generating continuous business value for clients through innovative machine learning, deep learning, and engineering capabilities. In this role, you will collaborate with data scientists to craft cutting-edge AI models, develop and utilize advanced ML platforms, establish and implement sophisticated ML pipelines, and oversee the entire ML lifecycle.
**Responsibilities:**
- Design and implement technical features using best practices for the relevant technology stack
- Collaborate with client-facing teams to grasp the solution context and contribute to technical requirement gathering and analysis
- Work alongside technical architects to validate design and implementation strategies
- Write production-ready code that is easily testable, comprehensible to other developers, and addresses edge cases and errors
- Ensure top-notch quality deliverables by adhering to architecture/design guidelines, coding best practices, and engaging in periodic design/code reviews
- Develop unit tests and higher-level tests to handle expected edge cases, errors, and optimal scenarios
- Utilize bug tracking, code review, version control, and other tools for organizing and delivering work
- Participate in scrum calls and agile ceremonies, and effectively communicate progress, issues, and dependencies
- Contribute consistently by researching and evaluating the latest technologies, conducting proofs-of-concept, and creating prototype solutions
- Aid the project architect in designing modules/components of the overall project/product architecture
- Break down large features into estimable tasks, lead estimation, and defend them with clients
- Independently implement complex features with minimal guidance, such as service- or application-wide changes
- Systematically troubleshoot code issues/bugs using stack traces, logs, monitoring tools, and other resources
- Conduct code/script reviews of senior engineers within the team
- Mentor and cultivate technical talent within the team

**Requirements:**
- Minimum 5+ years of hands-on experience in deploying and productionizing ML models at scale
- Proficiency in scaling GenAI or similar applications to accommodate high user traffic, large datasets, and reduced response times
- Strong expertise in developing RAG-based pipelines using frameworks like LangChain and LlamaIndex
- Experience in crafting GenAI applications such as answering engines, extraction components, and content authoring
- Expertise in designing, configuring, and utilizing ML engineering platforms like SageMaker, MLflow, Kubeflow, or other relevant platforms
- Familiarity with big data technologies including Hive, Spark, Hadoop, and queuing systems like Apache Kafka/RabbitMQ/AWS Kinesis
- Ability to quickly adapt to new technologies, innovate in solution creation, and independently conduct POCs on emerging technologies
- Proficiency in at least one programming language such as PySpark, Python, Java, Scala, etc., and solid foundations in data structures
- Hands-on experience in building metadata-driven, reusable design patterns for data pipelines, orchestration, and ingestion patterns (batch, real-time)
- Experience in designing and implementing solutions on distributed computing and cloud services platforms (e.g., AWS, Azure, GCP)
- Hands-on experience in constructing CI/CD pipelines and awareness of application monitoring practices

**Additional Skills:**
- AWS/Azure Solutions Architect certification with a comprehensive understanding of the broader AWS/Azure stack
- Knowledge of DevOps CI/CD, data security, and experience in designing on cloud platforms
- Willingness to travel to global offices as required to collaborate with clients or internal project teams

**Perks & Benefits:** ZS provides a holistic total rewards package encompassing health and well-being, financial planning, annual leave, personal growth, and professional development.
The organization offers robust skills development programs, various career progression options, internal mobility paths, and a collaborative culture that empowers individuals to thrive both independently and as part of a global team. ZS is committed to fostering a flexible and connected work environment that enables employees to combine work from home and on-site presence at clients/ZS offices for the majority of the week. This approach allows for the seamless integration of the ZS culture and innovative practices through planned and spontaneous face-to-face interactions.

**Travel:** Travel is an essential aspect of working at ZS, especially for client-facing roles. Business needs dictate the priority for travel, and while some projects may be local, all client-facing employees should be prepared to travel as required. Travel opportunities provide avenues to strengthen client relationships, gain diverse experiences, and enhance professional growth through exposure to different environments and cultures.

**Application Process:** Candidates must either possess or be able to obtain work authorization for their intended country of employment. To be considered, applicants must submit an online application, including a complete set of transcripts (official or unofficial).

*Note: NO AGENCY CALLS, PLEASE.*

For more information, visit [ZS Website](www.zs.com).

Posted 1 week ago

Apply

1.0 - 5.0 years

0 Lacs

Pune, Maharashtra

On-site

ZS is a place where passion changes lives. As a management consulting and technology firm focused on improving life and how we live it, our most valuable asset is our people. Here you'll work side-by-side with a powerful collective of thinkers and experts shaping life-changing solutions for patients, caregivers, and consumers worldwide. ZSers drive impact by bringing a client-first mentality to each and every engagement. We partner collaboratively with our clients to develop custom solutions and technology products that create value and deliver company results across critical areas of their business. Bring your curiosity for learning, bold ideas, courage, and passion to drive life-changing impact to ZS.

Our most valuable asset is our people. At ZS, we honor the visible and invisible elements of our identities, personal experiences, and belief systems - the ones that comprise us as individuals, shape who we are, and make us unique. We believe your personal interests, identities, and desire to learn are part of your success here. Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about.

As a Senior Cloud Site Reliability Engineer at ZS, you will be part of the CCoE (Cloud Center of Excellence) team. This team builds, maintains, and helps architect the systems enabling ZS client-facing software solutions. The CCoE team defines and implements best practices to ensure performant, resilient, and secure cloud solutions. The team comprises analytical problem solvers from diverse backgrounds who share a passion for quality delivery, whether the customer is a client or another ZS employee. The team has a presence in ZS's Evanston, Illinois, and Pune, India offices.

**What You'll Do:**
Acting as a Senior Cloud Site Reliability Engineer, you will work with a team of operations engineers and software developers to analyze, maintain, and nurture our cloud solutions/products to support the ever-growing company's clientele. As a technical expert, you will closely collaborate with various teams to ensure the stability of the environment by:
- Analyzing the current state, designing appropriate solutions, and working with the team to implement them.
- Coordinating emergency responses, performing root cause analysis, and identifying and implementing solutions to prevent recurrences.
- Working with the team to identify ways to increase MTBF and lower MTTR for the environment.
- Reviewing the entire application stack and executing initiatives to reduce failures, defects, and issues with overall performance.
- Identifying and working with the team to implement more efficient system procedures.
- Maintaining environment monitoring systems to provide the best visibility into the state of deployed products/solutions.
- Performing root cause analysis on incoming infrastructure alerts and working with teams to resolve them.
- Maintaining performance analysis tools, identifying any adverse changes to performance, and working with the teams to resolve them.
- Researching industry trends and technologies and promoting adoption of best-in-class tools and technologies.
- Taking the initiative to advance the quality, performance, or scalability of our cloud solutions by influencing the architecture or design of our products.
- Designing, developing, and executing automated tests to validate solutions and environments.
- Troubleshooting issues across the entire stack: infrastructure, software, application, and network.

**What You'll Bring:**
- 3+ years of experience working as a Site Reliability Engineer or in an equivalent position.
- 2+ years of experience with AWS cloud technologies; at least one AWS certification (Solutions Architect / DevOps Engineer) is required.
- 1+ years of experience functioning as a senior member of an infrastructure/software team.
- Hands-on experience with AWS services like EC2, RDS, EMR, CloudFront, ELB, API Gateway, CodeBuild, AWS Config, Systems Manager, Service Catalog, Lambda, etc.
- Full-stack IT experience with *nix, Windows, network/firewall concepts, source control (Bitbucket), and build/dependency management and continuous integration systems (TeamCity, Jenkins).
- Expertise in at least one scripting language, Python preferred.
- Firm understanding of application reliability, performance tuning, and scalability.
- Exposure to the big data technology stack (Spark, Hadoop, Scala, etc.) is preferred.
- Solid knowledge of infrastructure and cloud-native services, along with network technologies.
- Solid understanding of RDBMS and cloud database engines like PostgreSQL, MySQL, etc.
- Firm understanding of clusters, load balancers, and CDNs.
- Experience in fault-tolerant system design.
- Familiarity with Splunk data analysis, Datadog, or similar tools is a plus.
- A Bachelor's degree (Master's preferred) in a related technical field.
- Excellent analytical, troubleshooting, and communication skills.
- Strong verbal, written, and team presentation skills; fluency in English is required.
- Initiative and the ability to remain flexible and responsive in a dynamic environment.
- Ability to quickly learn new platforms, languages, tools, and techniques as needed to meet project requirements.

**Perks & Benefits:**
ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth, and professional development. Our robust skills development programs, multiple career progression options, internal mobility paths, and collaborative culture empower you to thrive as an individual and global team member.

We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections.

**Travel:**
Travel is a requirement at ZS for client-facing ZSers; business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures.

If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above. ZS is an equal opportunity employer and is committed to providing equal employment and advancement opportunities without regard to any class protected by applicable law.

**To Complete Your Application:**
Candidates must possess or be able to obtain work authorization for their intended country of employment. An online application, including a full set of transcripts (official or unofficial), is required to be considered.

NO AGENCY CALLS, PLEASE.

Find out more at: www.zs.com

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Software Engineer III - Big Data/Java/Scala at JPMorgan Chase within the Liquidity Risk (LRI) team, you will design and implement the next-generation build-out of a cloud-native liquidity risk management platform for JPMC. The Liquidity Risk technology organization aims to provide comprehensive solutions for managing the firm's liquidity risk and to meet our regulatory reporting obligations across 50+ markets. The program includes the strategic build-out of advanced liquidity calculation engines, incorporating AI and ML into our liquidity risk processes, and bringing digital-first reporting capabilities. The target platform must process 40-60 million transactions and positions daily, calculate risk presented by the current actual as well as model-based what-if state of the market, build a multidimensional picture of the corporate risk profile, and provide the ability to analyze it in real time.

Job Responsibilities:
- Execute standard software solutions, design, development, and technical troubleshooting.
- Apply knowledge of tools within the Software Development Life Cycle toolchain to improve the value realized by automation.
- Gather, analyze, and draw conclusions from large, diverse data sets to identify problems and contribute to decision-making in service of secure, stable application development.
- Learn and apply system processes, methodologies, and skills for the development of secure, stable code and systems.
- Add to a team culture of diversity, equity, inclusion, and respect.
- Contribute to the team's drive for continual improvement of the development process and innovative solutions to meet business needs.
- Apply appropriate dedication to support business goals through technology solutions.

Required Qualifications, Capabilities, and Skills:
- Formal training or certification in software engineering concepts and 2+ years of applied experience.
- Hands-on development experience and in-depth knowledge of Java, Scala, Spark, and related big data technologies (see the illustrative sketch below).
- Hands-on practical experience in system design, application development, testing, and operational stability.
- Experience in cloud technologies (AWS).
- Experience across the whole Software Development Life Cycle.
- Exposure to agile methodologies such as CI/CD, Application Resiliency, and Security.
- Emerging knowledge of software applications and technical processes within a technical discipline.
- Ability to work closely with stakeholders to define requirements.
- Ability to interact with partners across feature teams to collaborate on reusable services that meet solution requirements.

Preferred Qualifications, Capabilities, and Skills:
- Experience working on big data solutions, with evidence of the ability to analyze data to drive solutions.
- Exposure to complex computing using the JVM and big data.
- Ability to find issues and optimize existing workflows.
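For context, a minimal Scala sketch of the kind of Spark batch aggregation such a liquidity platform might run at scale (the input path, schema, and column names are hypothetical, not taken from the posting):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object PositionRollup {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("liquidity-position-rollup")
      .getOrCreate()

    // Hypothetical input: one row per transaction (account_id, currency, amount)
    val transactions = spark.read.parquet("s3://example-bucket/transactions/")

    // Net position per account and currency - the shape of aggregation a
    // liquidity calculation engine performs over tens of millions of rows
    val positions = transactions
      .groupBy(col("account_id"), col("currency"))
      .agg(sum(col("amount")).as("net_position"))

    positions.write.mode("overwrite").parquet("s3://example-bucket/positions/")
    spark.stop()
  }
}
```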

Posted 1 week ago

Apply

6.0 - 12.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

You will be responsible for leveraging your 6-12 years of experience in Data Warehouse and Big Data technologies to contribute to our team in Trivandrum. Your expertise in programming languages such as Scala, Spark, PySpark, Python, and SQL, along with big data technologies like Hadoop, Hive, Pig, and MapReduce, will be crucial for this role. Additionally, your proficiency in ETL and data engineering, including data warehouse design, ETL, data analytics, data mining, and data cleansing, will be highly valued.

As a part of our team, you will be expected to have hands-on experience with cloud platforms like GCP and Azure, as well as tools and frameworks such as Apache Hadoop, Airflow, Kubernetes, and containers. Your skills in data pipeline creation, optimization, troubleshooting, and data validation will play a key role in ensuring the efficiency and accuracy of our data processes.

Ideally, you should have at least 4 years of experience working with Scala, Spark, PySpark, Python, and SQL, in addition to 3+ years of strategic data planning, governance, and standard procedures. Experience in Agile environments and a good understanding of Java, ReactJS, and Node.js will be beneficial for this role. Moreover, your ability to work with data analytics, machine learning, and optimization will be advantageous. Knowledge of managing big data workloads and containerized environments, and experience in analyzing large datasets to optimize data workflows, will further strengthen your profile for this position.

UST is a global digital transformation solutions provider with a track record of working with some of the world's best companies for over 20 years. With a team of over 30,000 employees in 30 countries, we are committed to making a real impact through transformation. If you are passionate about innovation, agility, and driving positive change through technology, we invite you to join us on this journey of creating boundless impact and touching billions of lives in the process.

Posted 1 week ago

Apply

15.0 - 19.0 years

0 Lacs

Karnataka

On-site

Publicis Sapient is seeking a Principal Data Scientist to join its Data Science practice. In this role, you will act as a trusted advisor to clients, driving innovation in applied machine learning and statistical analysis. You will also lead efforts to enhance the group's capabilities for the future.

Your responsibilities will include leading teams to develop data-driven solutions powered by learning algorithms, educating teams on problem-solving models in machine learning, and translating objectives into data-driven solutions. You will work with diverse data sets and cutting-edge technology, and witness your insights translating into tangible business results regularly. Your role will play a crucial part in integrating machine learning into core market offerings such as eCommerce, advertising, AdTech, and business transformation. You will also direct analyses to enhance the effectiveness of marketing tactics and collaborate with leaders in various Publicis Sapient divisions to ensure data-driven solutions are brought to the market. Key areas of focus will be customer segmentation, media and advertising optimization, recommender systems, fraud analytics, personalization, and forecasting.

Your Impact:
- Design and implement analytical models to support product and project objectives.
- Research and innovate to develop next-generation solutions in digital marketing and customer experience.
- Provide technical leadership and mentorship in data science.
- Enhance the machine learning operations platform for Generative AI in various industries.
- Drive the application of machine learning in existing project disciplines.
- Design experiments to measure changes in user experience.
- Segment customers and markets for targeted messaging.
- Direct research on analytics platforms to guide solutions.
- Ensure solution and code quality through design and code reviews.
- Establish standards in machine learning and statistical analysis for consistency and efficiency.
- Assess client needs to adopt appropriate approaches for solving challenges.

Qualifications - Your Skills & Experience:
- Ph.D. in Computer Science, Math, Physics, Engineering, Statistics, or a related field.
- 15+ years of experience applying statistical learning methods in eCommerce and AdTech.
- Proficiency in Gen AI tools and frameworks, LLM fine-tuning, and LLM ops.
- Strong understanding of regression, classification, and cluster analysis approaches.
- Proficiency in statistical programming with R, SAS, SPSS, MATLAB, or Python.
- Expertise in the Python, R, Scala, and SQL programming languages.

Benefits of Working Here:
- Access to regional benefits.
- Gender-neutral policy.
- 18 paid holidays per year.
- Generous parental leave and new parent transition program.
- Flexible work arrangements.
- Employee Assistance Programs for wellness.

Publicis Sapient is a digital transformation partner that helps organizations transition to a digitally enabled state. With a focus on strategy, consulting, customer experience, and agile engineering, the company aims to accelerate clients' businesses by designing products and services that customers truly value. Join a team of over 20,000 people worldwide who are united by core values and a purpose of helping people thrive in the pursuit of the next.

Ideal candidates for this role will have experience in traditional AI and recent experience in Gen AI and Agentic AI. Organization, adaptability to change, and hands-on coding skills are highly valued.

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Hyderabad, Telangana

On-site

We are looking for an experienced Snowflake Data Engineer to support a Proof of Concept (POC) project and potentially scale it into a broader production environment. This is an exciting opportunity to contribute to shaping a modern data platform with real-time streaming capabilities using Snowflake's native features.

Your key responsibilities will include leading or contributing to a POC for Snowflake-based data solutions; designing and developing real-time data ingestion pipelines using Snowpipe and Streams & Tasks (a minimal sketch follows below); handling and processing streaming data within Snowflake; writing and optimizing complex SQL queries, stored procedures, and UDFs for data transformation; working with cloud platforms (AWS, Azure, GCP) to manage data ingestion and compute resources; collaborating with stakeholders to translate business needs into data solutions; proactively identifying improvements in data architecture, performance, and process automation; and contributing to building and mentoring a scalable data engineering team.

You should have at least 4 years of experience in data engineering; hands-on work with Snowflake development, performance tuning, and architecture; strong proficiency in SQL, including analytical functions, CTEs, and complex logic; experience with Snowpipe, Streams, and Tasks; familiarity with streaming platforms like Kafka, Pulsar, or Confluent; experience with Python or Scala for scripting or data manipulation; comfort working in cloud-native environments (AWS, Azure, GCP); experience with ETL/ELT development, orchestration, and monitoring tools; and Git or version control experience.

Preferred qualifications include prior experience delivering POC projects, knowledge of CI/CD practices for data pipeline deployment, understanding of data security, role-based access control, and governance in Snowflake, and strong communication skills with a collaborative mindset. If you are a self-starter who can take initiative, contribute ideas, and help seed a high-performing team, this role at Conglomerate IT could be the right fit for you.
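As a rough illustration of Snowflake Streams and Tasks, here is a hedged Scala sketch that issues the relevant SQL over JDBC (the account URL, credentials, warehouse, and table names are placeholders; assumes the Snowflake JDBC driver is on the classpath):

```scala
import java.sql.DriverManager
import java.util.Properties

object StreamsAndTasksDemo {
  def main(args: Array[String]): Unit = {
    // Placeholder credentials and account locator
    val props = new Properties()
    props.put("user", "DEMO_USER")
    props.put("password", "DEMO_PASSWORD")
    props.put("warehouse", "COMPUTE_WH")
    props.put("db", "ANALYTICS")
    props.put("schema", "PUBLIC")

    val conn = DriverManager.getConnection(
      "jdbc:snowflake://myaccount.snowflakecomputing.com/", props)
    val stmt = conn.createStatement()
    try {
      // A stream records change data (inserts/updates/deletes) on a table
      stmt.execute("CREATE OR REPLACE STREAM raw_events_stream ON TABLE raw_events")

      // A task periodically drains the stream into a curated table
      stmt.execute(
        """CREATE OR REPLACE TASK load_curated_events
          |  WAREHOUSE = COMPUTE_WH
          |  SCHEDULE = '1 MINUTE'
          |AS
          |  INSERT INTO curated_events SELECT * FROM raw_events_stream""".stripMargin)

      // Tasks are created suspended; resume to start the schedule
      stmt.execute("ALTER TASK load_curated_events RESUME")
    } finally {
      stmt.close()
      conn.close()
    }
  }
}
```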

Posted 1 week ago

Apply

2.0 - 4.0 years

10 - 20 Lacs

Bengaluru

Work from Office

Primary Skills: Data Engineering
Secondary Skills: SQL & Python
Education: Bachelor's or Master's degree in Computer Science, IT, or a related field
Experience Range: 2 to 4 years in data engineering, proficient in PySpark or a similar data-focused role; experience in data/web scraping; experience with ETL tools; expertise in SQL, Python, Scala, and Java; experience with cloud-based platforms (AWS, Google Cloud, Azure); experience with data warehousing concepts; familiarity with version control systems like Git
Domain: IT
Start Date: Immediate
Duration of the Project: 6-month contract (extendable)
Shift Timing: Regular shift
CTC: As per industry
Number of Interviews: L1 & L2 client interviews + HR
Location: Bangalore
No. of Positions: 4

Job Description: Design, develop, and maintain scalable ETL/ELT pipelines; maintain web crawlers; manage SQL/NoSQL databases; implement and maintain data engineering processes; provide solutions that help scale operations.

Documents Mandatory: Form 16, salary slips, Aadhaar, PAN card, academic documents, offer letter, and experience letter, all to be submitted after selection.

Note: Immediate joiners are welcome.

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Responsible for designing, developing, and optimizing data processing solutions using a combination of Big Data technologies. Focus on building scalable and efficient data pipelines for handling large datasets and enabling batch and real-time data streaming and processing.

Responsibilities:
- Develop Spark applications using Scala or Python (PySpark) for data transformation, aggregation, and analysis.
- Develop and maintain Kafka-based data pipelines: design Kafka Streams applications, set up Kafka clusters, and ensure efficient data flow.
- Create and optimize Spark applications using Scala and PySpark: leverage these languages to process large datasets and implement data transformations and aggregations.
- Integrate Kafka with Spark for real-time processing: build systems that ingest real-time data from Kafka and process it using Spark Streaming or Structured Streaming (a minimal sketch follows this posting).
- Collaborate with data teams, including data engineers, data scientists, and DevOps, to design and implement data solutions.
- Tune and optimize Spark and Kafka clusters: ensure high performance, scalability, and efficiency of data processing workflows.
- Write clean, functional, and optimized code, adhering to coding standards and best practices.
- Troubleshoot and resolve issues: identify and address any problems related to Kafka and Spark applications.
- Maintain documentation: create and maintain documentation for Kafka configurations, Spark jobs, and other processes.
- Stay updated on technology trends: continuously learn and apply new advancements in functional programming, big data, and related technologies.

Proficiency in:
- Hadoop ecosystem big data tech stack (HDFS, YARN, MapReduce, Hive, Impala).
- Spark (Scala, Python) for data processing and analysis.
- Kafka for real-time data ingestion and processing.
- ETL processes and data ingestion tools.
- Deep hands-on expertise in PySpark, Scala, and Kafka.

Programming Languages: Scala, Python, or Java for developing Spark applications; SQL for data querying and analysis.

Other Skills: data warehousing concepts; Linux/Unix operating systems; problem-solving and analytical skills; version control systems.

------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Applications Development ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Most Relevant Skills Please see the requirements listed above. ------------------------------------------------------ Other Relevant Skills For complementary skills, please see above and/or contact the recruiter. ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
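A minimal sketch of the Kafka-to-Spark Structured Streaming integration described in this posting, in Scala (the broker address and topic name are placeholders; assumes the spark-sql-kafka connector is on the classpath):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object KafkaStructuredStreaming {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-structured-streaming")
      .getOrCreate()
    import spark.implicits._

    // Ingest a stream of records from a Kafka topic (names are illustrative)
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "events")
      .load()

    // Kafka delivers key/value as binary; cast the payload to string
    val events = raw.selectExpr("CAST(value AS STRING) AS payload", "timestamp")

    // Example transformation: count events per one-minute window
    val counts = events
      .withWatermark("timestamp", "5 minutes")
      .groupBy(window($"timestamp", "1 minute"))
      .count()

    // Write windowed counts to the console for demonstration
    val query = counts.writeStream
      .outputMode("update")
      .format("console")
      .start()

    query.awaitTermination()
  }
}
```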

Posted 1 week ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

The CDP ETL & Database Engineer will specialize in architecting, designing, and implementing solutions that are sustainable and scalable. The ideal candidate will understand CRM methodologies, have an analytical mindset, and have a background in relational modeling in a hybrid architecture. The candidate will help drive the business towards specific technical initiatives and will work closely with the Solutions Management, Delivery, and Product Engineering teams. The candidate will join a team of developers across the US, India, and Costa Rica.

Responsibilities

ETL Development - The CDP ETL & Database Engineer will be responsible for building pipelines to feed downstream data processes. They will be able to analyze data, interpret business requirements, and establish relationships between data sets. The ideal candidate will be familiar with different encoding formats and file layouts such as JSON and XML.

Implementations & Onboarding - Will work with the team to onboard new clients onto the ZMP/CDP+. The candidate will solidify business requirements, perform ETL file validation, establish users, perform complex aggregations, and syndicate data across platforms. The hands-on engineer will take a test-driven approach towards development and will be able to document processes and workflows.

Incremental Change Requests - The CDP ETL & Database Engineer will be responsible for analyzing change requests and determining the best approach towards their implementation and execution. This requires the engineer to have a deep understanding of the platform's overall architecture. Change requests will be implemented and tested in a development environment to ensure their introduction will not negatively impact downstream processes.

Change Data Management - The candidate will adhere to change data management procedures and actively participate in CAB meetings where change requests will be presented and reviewed. Prior to introducing a change, the engineer will ensure that processes are running in a development environment. The engineer will be asked to do peer-to-peer code reviews and solution reviews before production code deployment.

Collaboration & Process Improvement - The engineer will be asked to participate in knowledge-share sessions where they will engage with peers and discuss solutions, best practices, and overall approach. The candidate will be able to look for opportunities to streamline processes with an eye towards building a repeatable model to reduce implementation duration.

Job Requirements

The CDP ETL & Database Engineer will be well versed in the following areas:
- Relational data modeling.
- ETL and FTP concepts.
- Advanced analytics using SQL functions.
- Cloud technologies - AWS, Snowflake.
- Able to decipher requirements, provide recommendations, and implement solutions within predefined parameters.
- The ability to work independently while also contributing in a team setting.
- Able to confidently communicate status, raise exceptions, and voice concerns to their direct manager.
- Participate in internal client project status meetings with Solution/Delivery management.
- When required, collaborate with the Business Solutions Analyst (BSA) to solidify requirements.
- Ability to work in a fast-paced, agile environment; the individual will be able to work with a sense of urgency when escalated issues arise.
- Strong communication and interpersonal skills; ability to multitask and prioritize workload based on client demand.
- Familiarity with Jira for workflow management and time allocation.
- Familiarity with the Scrum framework: backlog, planning, sprints, story points, retrospectives.

Required Skills

ETL - ETL tools such as Talend (preferred, not required); DMExpress - nice to have; Informatica - nice to have.

Database - Hands-on experience with the following database technologies: Snowflake (required); MySQL/PostgreSQL - nice to have; familiarity with NoSQL DB methodologies (nice to have).

Programming Languages - Can demonstrate knowledge of any of the following: PL/SQL; JavaScript - strong plus; Python - strong plus; Scala - nice to have.

AWS - Knowledge of the following AWS services: S3, EMR (concepts), EC2 (concepts), Systems Manager / Parameter Store.

Other - Understands JSON data structures and key-value pairs. Working knowledge of code repositories such as Git and Win CVS; workflow management tools such as Apache Airflow, Kafka, Automic/Appworx; Jira.

Minimum Qualifications
- Bachelor's degree or equivalent.
- 4+ years' experience.
- Excellent verbal & written communication skills.
- Self-starter, highly motivated.
- Analytical mindset.

Company Summary

Zeta Global is a NYSE-listed, data-powered marketing technology company with a heritage of innovation and industry leadership. Founded in 2007 by entrepreneur David A. Steinberg and John Sculley, former CEO of Apple Inc. and Pepsi-Cola, the company combines the industry's 3rd largest proprietary data set (2.4B+ identities) with artificial intelligence to unlock consumer intent, personalize experiences, and help our clients drive business growth. Our technology runs on the Zeta Marketing Platform, which powers 'end to end' marketing programs for some of the world's leading brands. With expertise encompassing all digital marketing channels - Email, Display, Social, Search and Mobile - Zeta orchestrates acquisition and engagement programs that deliver results that are scalable, repeatable and sustainable.

Zeta Global is an Equal Opportunity/Affirmative Action employer and does not discriminate on the basis of race, gender, ancestry, color, religion, sex, age, marital status, sexual orientation, gender identity, national origin, medical condition, disability, veteran status, or any other basis protected by law.

Zeta Global Recognized in Enterprise Marketing Software and Cross-Channel Campaign Management Reports by Independent Research Firm

https://www.forbes.com/sites/shelleykohan/2024/06/1G/amazon-partners-with-zeta-global-to-deliver-gen-ai-marketing-automation/
https://www.cnbc.com/video/2024/05/06/zeta-global-ceo-david-steinberg-talks-ai-in-focus-at-milken-conference.html
https://www.businesswire.com/news/home/20240G04622808/en/Zeta-Increases-3Q%E2%80%GG24-Guidance
https://www.prnewswire.com/news-releases/zeta-global-opens-ai--data-labs-in-san-francisco-and-nyc-300S45353.html
https://www.prnewswire.com/news-releases/zeta-global-recognized-in-enterprise-marketing-software-and-cross-channel-campaign-management-reports-by-independent-research-firm-300S38241.html

Posted 1 week ago

Apply

2.0 - 4.0 years

6 - 10 Lacs

Noida

Remote

Must Have:
- Minimum 2 years of experience in developing Java applications
- Experience with Spring Boot/Spring
- Experience with microservices development
- Professional, precise communication skills
- Experience in REST API development
- Experience with MySQL/PostgreSQL
- Experience in troubleshooting and resolving issues in existing applications

Nice to Have:
- Knowledge of web application development in Scala
- Knowledge of the Play Framework
- Hands-on experience with AWS/Azure/GCP
- Knowledge of HTML/CSS/JS & TypeScript

Posted 1 week ago

Apply

8.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
- Design and develop scalable systems for processing unstructured data into actionable insights using Python, Flask, and Azure Cognitive Services
- Integrate Optical Character Recognition (OCR), Speech-to-Text, and NLP models into workflows to handle various file formats such as PDFs, images, audio files, and text documents
- Implement robust error-handling mechanisms, multithreaded architectures, and RESTful APIs to ensure seamless user experiences
- Utilize Azure OpenAI, Azure Speech SDK, and Azure Form Recognizer to create AI-powered solutions tailored to meet complex business requirements
- Collaborate with cross-functional teams to drive innovation and implement analytics workflows and ML models to enhance business processes and decision-making
- Ensure the accuracy, efficiency, and scalability of systems, focusing on healthcare claims processing, document digitization, and data extraction
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications:
- 8+ years of relevant experience in AI/ML engineering and cognitive automation
- Proven experience as an AI/ML Engineer, Software Engineer, Data Analyst, or a similar role in the tech industry
- Extensive experience with Azure Cognitive Services and other AI technologies
- Experience with SQL, Python, PySpark, and Scala
- Proficiency in developing and deploying machine learning models and handling large data sets
- Solid programming skills in Python and familiarity with the Flask web framework
- Excellent problem-solving skills and the ability to work in a fast-paced environment
- Solid communication and collaboration skills, capable of working effectively with cross-functional teams
- Demonstrated ability to implement robust ETL or ELT workflows for structured and unstructured data ingestion, transformation, and storage

Preferred Qualification:
- Experience in the healthcare industry

Skills: Python programming and SQL; data analytics and machine learning; classification and unsupervised learning; regression and NLP; cloud and DevOps foundations; data visualization and reporting.

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location, and income, deserves the opportunity to live their healthiest life.
Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes — an enterprise priority reflected in our mission.

Posted 1 week ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

7+ years of experience in Big Data with strong expertise in Spark and Scala.

Mandatory Skills:
- Big Data: primarily Spark and Scala
- Strong knowledge of HDFS, Hive, and Impala, with knowledge of Unix, Oracle, and Autosys
- Spark streaming experience is required, not just Spark batch
- NoSQL DB experience: HBase/Mongo/Couchbase
- Strong communication skills

Good to Have: Agile methodology and banking expertise.

Posted 1 week ago

Apply

6.0 - 8.0 years

15 - 27 Lacs

Bengaluru

Work from Office

Job Summary
We are seeking a Senior Data Engineer to join our growing data team, where you will help build and scale the data infrastructure powering analytics, machine learning, and product innovation. As a Senior Data Engineer, you will be responsible for designing, building, and optimizing robust, scalable, and secure data pipelines and platforms. You will work closely with data scientists, software engineers, and product teams to deliver clean, reliable data for critical business and clinical applications.

Key Responsibilities:
Design, implement, and optimize complex data pipelines using advanced SQL, ETL tools, and integration technologies
Collaborate with cross-functional teams to implement optimal data solutions for advanced analytics and data science initiatives
Spearhead process improvements, including automation, data delivery optimization, and infrastructure redesign for scalability
Evaluate and recommend emerging data technologies to build comprehensive data integration strategies
Lead technical discovery processes, defining complex requirements and mapping out detailed scenarios
Develop and maintain data governance policies and procedures

What You'll Need to Be Successful (Required Skills):
5-7 years of experience in data engineering or related roles
Advanced proficiency in multiple programming languages (e.g., Python, Java, Scala) and expert-level SQL knowledge
Extensive experience with big data technologies (Hadoop ecosystem, Spark, Kafka) and cloud-based environments (Azure, AWS, or GCP)
Proven experience in designing and implementing large-scale data warehousing solutions
Deep understanding of data modeling techniques and enterprise-grade ETL tools
Demonstrated ability to solve complex analytical problems

Education/Certifications:
Bachelor's degree in Computer Science, Information Management or a related field

Preferred Skills:
Experience in the healthcare industry, including clinical, financial, and operational data
Knowledge of machine learning and AI technologies and their data requirements
Familiarity with data visualization tools and real-time data processing
Understanding of data privacy regulations and experience implementing compliant solutions

Note: We work 5 days from office, India regular shift. Netsmart, India has set up our new Global Capability Centre (GCC) at Godrej Centre, Byatarayanapura (Hebbal area) - https://maps.app.goo.gl/RviymAeGSvKZESSo6
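As one illustration of the pipeline work this role describes, here is a minimal batch ETL sketch in Spark/Scala. The bucket paths, column names, and cleansing rules are assumptions made for the example, not details from the posting.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object ClaimsEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("claims-etl").getOrCreate()

    // Ingest raw claims (path and schema are illustrative assumptions)
    val raw = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("s3a://raw-zone/claims/*.csv")

    // Basic cleansing: drop duplicates, standardise types, filter bad rows
    val cleaned = raw
      .dropDuplicates("claim_id")
      .withColumn("amount", col("amount").cast("decimal(12,2)"))
      .filter(col("amount") > 0)

    // Write to the curated zone, partitioned for downstream analytics
    // (assumes the source carries a claim_date column)
    cleaned.write
      .mode("overwrite")
      .partitionBy("claim_date")
      .parquet("s3a://curated-zone/claims/")

    spark.stop()
  }
}
```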

Posted 1 week ago

Apply

5.0 - 7.0 years

0 Lacs

Andhra Pradesh, India

On-site

Job Description / Responsibilities
5-7 years of experience in Big Data stacks: Spark/Scala/Hive/Impala/Hadoop
Strong expertise in Scala, with good hands-on experience in the Scala programming language
Able to model a given problem statement using object-oriented programming concepts
Basic understanding of the Spark in-memory processing framework and the concept of map and reduce tasks
Hands-on experience on data processing projects
Able to frame SQL queries and analyze data based on the given requirements
Advanced SQL knowledge
GitHub or Bitbucket

Primary Skill
Spark and Scala, with good hands-on experience in the Scala programming language

Secondary Skill
SQL, Python, Hive, Impala, AWS
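Since this posting stresses Spark's in-memory processing model and the concept of map and reduce tasks, a minimal RDD-level sketch in Scala could look like this; the input path and log-line format are assumptions for illustration.

```scala
import org.apache.spark.sql.SparkSession

object LogLevelCounts {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("log-level-counts").getOrCreate()
    val sc = spark.sparkContext

    // Map phase: parse each log line into a (level, 1) pair
    // (assumes the log level is the first whitespace-separated token)
    val pairs = sc.textFile("hdfs:///logs/app/*.log")
      .map(_.split(" ", 2))
      .filter(_.length == 2)
      .map(parts => (parts(0), 1))

    // Reduce phase: sum the counts per log level across partitions
    val counts = pairs.reduceByKey(_ + _)

    // cache() keeps the result in memory for repeated queries
    counts.cache()
    counts.collect().foreach { case (level, n) => println(s"$level: $n") }

    spark.stop()
  }
}
```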

Posted 1 week ago

Apply

4.0 - 9.0 years

6 - 11 Lacs

Hyderabad

Work from Office

Job Title: SAP CPI
Job Location: Bangalore / Chennai / Hyderabad / Pune / Delhi NCR
Work Mode: Work From Office (5 days)

One of our esteemed clients, a CMM Level 5 organization, is planning to fill over 1000 positions across SAP modules (MM with WM / Basis / SD / Basis HANA). This is an incredible chance for you to take your career to new heights.

Open skill areas: SAP PP/QM, SAP PS, SAP CPI, SAP MDG, SAP BODS, Open Text, S/4 SAP Fiori, S/4 SAP FICO, S/4 SAP ABAP, SAP Basis Admin, SAP Basis with S/4HANA, SAP ABAP BW, SAP BOBJ Admin, SAP Basis, BOBJ Admin, SAP PP, BODS, WM, Data Migration, SAP Security, ITGC Control and Audit, Ab Initio, Desktop Support, Service Desk, MuleSoft, SOC, Cyber Security, Scala, SailPoint

Allow us to provide you with more details:
1. Payroll and Location: If selected, you will be working on the payroll of our organization, Diverse Lynx India Pvt. Ltd., and stationed at our client's office located in Hyderabad. (WFO is mandatory)
2. Interview Process: Once your profile has been shortlisted by our technical panel, we will promptly arrange a face-to-face or virtual interview for you. Rest assured that we will keep you informed every step of the way. Kindly share your preferred interview slot date and time so that we can line up the project team for technical evaluation.
3. Selection Confirmation: We understand how important it is to receive timely updates during the hiring process. Therefore, we are committed to providing confirmation of your selection on the same day as your interview.

This is an excellent opportunity for professionals seeking growth and development in the SAP field. Candidates must have implementation/support experience, with a minimum of 4 years and up to 10 years of total experience.

To book your interview slot, please share the details below; you can connect with our head directly for any further information and support.
-Full Name (as per PAN card):
-PAN Card No.:
-Date of Birth (DD/MM/YYYY):
-Total Experience:
-Relevant Experience in the given skill set:
-Highest Qualification and Passing Year, with dates from 10th onwards:
-Current Location:
-Preferred Location:
-Current Company (Parent Company):
-Contact No.:
-Mail ID:
-Notice Period:
-Reason for Change:
-Current CTC:
-Expected CTC:

Best Regards
Manish Prasad
Account Manager (Human Resource - Recruitment)
Diverse Lynx India Pvt. Ltd.
Email ID: manish.prasad@diverselynx.in
URL: http://www.diverselynx.in

Posted 1 week ago

Apply

5.0 - 8.0 years

5 - 8 Lacs

Bengaluru

Work from Office

Skills desired:
Strong SQL (complex multi-table SQL joins)
Python skills (FastAPI or Flask framework)
PySpark
Commitment to work in overlapping hours
GCP knowledge (BigQuery, Dataproc and Dataflow)
Amex experience preferred (not mandatory)
Power BI preferred (not mandatory)

Skills: Flask, PySpark, Python, SQL
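Although this posting asks for Python/PySpark, the multi-table join work it describes can be sketched in Spark's Scala API, in keeping with this page's Scala focus. The table names, paths, and columns below are invented for illustration.

```scala
import org.apache.spark.sql.SparkSession

object MultiJoinReport {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("multi-join-report").getOrCreate()

    // Register illustrative source tables (names and columns are assumptions)
    spark.read.parquet("gs://warehouse/customers").createOrReplaceTempView("customers")
    spark.read.parquet("gs://warehouse/orders").createOrReplaceTempView("orders")
    spark.read.parquet("gs://warehouse/payments").createOrReplaceTempView("payments")

    // A three-way join with aggregation, expressed in plain SQL
    val report = spark.sql("""
      SELECT c.customer_id,
             c.region,
             COUNT(o.order_id) AS order_count,
             SUM(p.amount)     AS total_paid
      FROM customers c
      JOIN orders o ON o.customer_id = c.customer_id
      LEFT JOIN payments p ON p.order_id = o.order_id
      GROUP BY c.customer_id, c.region
    """)

    report.show(20)
    spark.stop()
  }
}
```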

Posted 1 week ago

Apply

2.0 - 6.0 years

11 - 15 Lacs

Kochi, Chennai, Thiruvananthapuram

Work from Office

Techversant is seeking a highly skilled and experienced AI/ML Lead to lead the design and implementation of our artificial intelligence and machine learning solutions. The successful candidate will work closely with cross-functional teams to understand business requirements and develop scalable, efficient, and robust AI/ML systems.

Job Description
As an AI/ML Lead, you will:
Build solutions based on deep learning, reinforcement learning, computer vision, expert systems, transfer learning, NLP, and generative models
Define, design, and deliver ML architecture patterns operable in native and hybrid cloud architectures
Implement machine learning algorithms in services and pipelines that can be used at web scale
Create demos and proofs of concept; develop AI/ML based products and services
Create functional and technical specifications for AI/ML solutions, following the SDLC process
Apply advanced analytical knowledge of data and data conditioning
Program advanced computing solutions and develop algorithms
Develop software and data models, and execute predictive analysis
Design, develop, and implement generative AI models using state-of-the-art techniques
Collaborate with cross-functional teams to define project goals, research requirements and develop innovative solutions

Required skills:
Strong proficiency in Python/R/Scala (Python is a must; R and Scala are a plus)
Strong proficiency in SQL and NoSQL databases
Experience in implementing and deploying AI/machine learning solutions (using various models, such as CNN, RNN, fuzzy logic, Q-learning, SVM, ensembles, logistic regression, random forests, etc.)
Specialization in at least one AI/ML stack, with frameworks and tools such as MXNet and TensorFlow
Hands-on experience with data analytics and classical machine learning tools (e.g. Pandas, NumPy, Scikit-learn) and deep learning frameworks (e.g. TensorFlow, PyTorch)

What will make you stand out:
Experience with production software engineering routines in DevOps/MLOps (e.g. continuous code integration and deployment)
Experience with cloud-based solutions (e.g. AWS, Azure, GCP)
B.Tech/M.Tech/Ph.D in CS/IS/IT, or MSc Statistics, or MCA

What we offer:
Excellent career growth opportunities and exposure to multiple technologies
Fixed weekday schedule, meaning you'll have your weekends off!
Unique leave benefits and encashment options based on performance
Fun family environment surrounded by experienced developers
Various internal employee reward programs based on performance
Various bonus programs for training hours taken, certifications, and special value delivered to the business through ideas and innovation
Work-life balance: flexible work timings, early-out Fridays, various social and cultural activities, etc.
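This posting names Scala as a plus alongside classical models such as logistic regression and random forests. As one hedged illustration, a minimal Spark MLlib pipeline in Scala might look like this; the data path, feature columns, and label are assumptions, not details from the posting.

```scala
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.sql.SparkSession

object ChurnModel {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("churn-model").getOrCreate()

    // Training data with numeric features and a binary label
    // (path and column names are illustrative assumptions)
    val data = spark.read.parquet("/data/churn_features")

    // Assemble raw columns into the single vector column MLlib expects
    val assembler = new VectorAssembler()
      .setInputCols(Array("tenure", "monthly_spend", "support_calls"))
      .setOutputCol("features")

    val lr = new LogisticRegression()
      .setLabelCol("churned")
      .setFeaturesCol("features")
      .setMaxIter(100)

    val Array(train, test) = data.randomSplit(Array(0.8, 0.2), seed = 42L)

    val model = new Pipeline().setStages(Array(assembler, lr)).fit(train)

    // Score the held-out set; evaluation metrics are omitted for brevity
    model.transform(test).select("churned", "probability", "prediction").show(10)

    spark.stop()
  }
}
```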

Posted 1 week ago

Apply

3.0 - 6.0 years

5 - 6 Lacs

Kochi, Chennai, Thiruvananthapuram

Work from Office

Techversant is seeking experienced Data Scientist Engineers who will be responsible for developing and driving new business opportunities internationally. The incumbent will be responsible for discovering sales opportunities and creating qualified leads.

Job Description

Key Responsibilities
Data mining, i.e. extracting usable data from valuable data sources
Using machine learning tools to select features and to create and optimize classifiers
Carrying out the preprocessing of structured and unstructured data
Enhancing data collection procedures to include all information relevant to developing analytic systems
Processing, cleansing, and validating the integrity of data to be used for analysis
Analyzing large amounts of information to find patterns and solutions
Developing prediction systems and machine learning algorithms
Presenting results in a clear manner
Proposing solutions and strategies to tackle business challenges
Collaborating with business and IT teams

Required Skills
Programming skills: knowledge of statistical programming languages like R and Python, and database query languages like SQL, Hive and Pig, is desirable. Familiarity with Scala, Java, or C++ is an added advantage.
Statistics: good applied statistical skills, including knowledge of statistical tests, distributions, regression, maximum likelihood estimators, etc. Proficiency in statistics is essential for data-driven companies.
Machine learning: good knowledge of machine learning methods like k-Nearest Neighbors, Naive Bayes, SVM, and decision forests.
Strong math skills (multivariable calculus and linear algebra): understanding the fundamentals of multivariable calculus and linear algebra is important, as they form the basis of many predictive-performance and algorithm-optimization techniques.
Data wrangling: proficiency in handling imperfections in data is an important aspect of the data scientist role.
Experience with data visualization tools like matplotlib, ggplot, d3.js and Tableau that help to visually encode data.
Excellent communication skills: it is incredibly important to be able to describe findings to both technical and non-technical audiences.
Strong software engineering background and hands-on experience with data science tools
Problem-solving aptitude, an analytical mind and great business sense
Degree in Computer Science, Engineering or a relevant field is preferred
Proven experience as a Data Analyst or Data Scientist

What we offer:
Excellent career growth opportunities and exposure to multiple technologies
Fixed weekday schedule, meaning you'll have your weekends off!
Unique leave benefits and encashment options based on performance
Long-term growth opportunities
Fun family environment surrounded by experienced developers
Various internal employee reward programs based on performance
Various bonus programs for training hours taken, certifications, and special value delivered to the business through ideas and innovation
Work-life balance: flexible work timings, early-out Fridays, various social and cultural activities, etc.

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

How do Technogisers function?
Value: Exploring technologies and implementing them on projects, provided they make business sense and deliver value.
Engagement: Be it offshore or onshore, we engage with our clients daily. This helps build a trustworthy relationship while we collaborate on strategic solutions to business problems.
Solution: We make hands-on contributions to back-end and front-end design and development while nurturing our DevOps culture.
Thought Leadership: Attend or present at technical meet-ups, workshops and conferences to share knowledge and help build the Technogise brand.

Note: All our roles are customer-facing. This is a full-time, dynamic-hybrid role as a Technology Consultant (Developer) located in Pune.

Core Skills
We are looking for 4-8 years or 8-12 years of industry experience exclusively in Java/backend tech/full stack
You are an advocate of good engineering practices
Influence technical decision-making and high-level design decisions, such as the choice of frameworks and technical approach
Demonstrate the ability to understand different approaches to application and integration, and influence decisions by making appropriate trade-offs

Ways of Working
You communicate effectively with other roles in the project, at the team and client levels
You drive discussions effectively at the team and client levels, and encourage others to participate

Going Beyond
Establish credibility within the team through technical and leadership skills
Mentor fellow team members within the project team and provide technical guidance to others beyond project boundaries
Actively participate in organisational initiatives

Skills: Java, Node.js, Go (Golang), full-stack development, Scala and JavaScript

Posted 1 week ago

Apply

4.0 - 9.0 years

10 - 20 Lacs

Coimbatore

Work from Office

Position Name: Data Engineer
Location: Coimbatore (hybrid, 3 days per week)
Work Shift Timing: 1.30 pm to 10.30 pm (IST)
Mandatory Skills: Scala, Spark, Python, Databricks
Good to Have: Java and Hadoop

The Role:
Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights
Constructing infrastructure for efficient ETL processes from various sources and storage systems
Leading the implementation of algorithms and prototypes to transform raw data into useful information
Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations
Creating innovative data validation methods and data analysis tools
Ensuring compliance with data governance and security policies
Interpreting data trends and patterns to establish operational alerts
Developing analytical tools, programs, and reporting mechanisms
Conducting complex data analysis and presenting results effectively
Preparing data for prescriptive and predictive modeling
Continuously exploring opportunities to enhance data quality and reliability
Applying strong programming and problem-solving skills to develop scalable solutions

Requirements:
Experience with Big Data technologies (Hadoop, Spark, NiFi, Impala)
Hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, and distributed data pipelines
High proficiency in Scala/Java and Spark for applied large-scale data processing
Expertise with big data technologies, including Spark, data lakes, and Hive
Solid understanding of batch and streaming data processing techniques
Proficient knowledge of the data lifecycle management process, including data collection, access, use, storage, transfer, and deletion
Expert-level ability to write complex, optimized SQL queries across extensive data volumes
Experience with HDFS, NiFi and Kafka
Experience with Apache Ozone, Delta tables, Databricks, Axon (Kafka), Spring Batch and Oracle DB
Familiarity with Agile methodologies
Obsession with service observability, instrumentation, monitoring, and alerting
Knowledge of or experience in architectural best practices for building data lakes

Interested candidates, share your resume at Neesha1@damcogroup.com along with the details below:
Total Exp:
Relevant Exp in Scala & Spark:
Current CTC:
Expected CTC:
Notice Period:
Current Location:
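Given this posting's emphasis on Databricks and Delta tables, a minimal upsert (MERGE) sketch in Scala using the Delta Lake API might look like the following. It assumes the delta-spark library is available; the paths, join key, and schema are illustrative.

```scala
import io.delta.tables.DeltaTable
import org.apache.spark.sql.SparkSession

object DeltaUpsert {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("delta-upsert").getOrCreate()

    // Incoming batch of changed records (path and schema are assumptions)
    val updates = spark.read.parquet("/landing/customers_delta")

    // Target Delta table, e.g. one managed on Databricks
    val target = DeltaTable.forPath(spark, "/lake/customers")

    // Upsert: update rows that match on the key, insert the rest
    target.as("t")
      .merge(updates.as("s"), "t.customer_id = s.customer_id")
      .whenMatched().updateAll()
      .whenNotMatched().insertAll()
      .execute()

    spark.stop()
  }
}
```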

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
