From a naive, experience-based perspective, implicits and type inference
seem to influence each other.
I could not find this relation in the specification. Am I missing something,
or is it simply not there and defined by the current implementation of the
compiler?
Erik
Why doesn't Scala have complete inference like Haskell or OCaml?
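The short answer is usually that Scala uses local type inference: subtyping and overloading make global Hindley-Milner-style inference impractical. A minimal sketch of the visible difference (my illustration, not from the thread):

```scala
// In Haskell or OCaml, `\x -> x` gets a principal type with no annotations.
// Scala's local inference needs the lambda's parameter type from somewhere:

// val id = x => x            // error: missing parameter type
val id = (x: Int) => x        // annotate the parameter, or...
val id2: Int => Int = x => x  // ...supply an expected type
```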
Here's a puzzler. Why does this code fail to compile when the
"functionThatTakesWidget(widgetThatDoesNotWork)" line is included? This
fails in Scala 2.10.5 and 2.11.6.
It seems like type inference is choosing "Nothing with SomeTrait" as the
greatest lower bound for A2, even though "MyObject with SomeTrait" would be
a better greatest lower bound.
trait SomeTrait {}
class MyObject
This is a follow up to the following question on SO:
http://stackoverflow.com/questions/15157193/is-there-a-way-to-specify-a-subset-of-type-parameters-in-scala-inferring-the-re
In short:
scala> class X[A, B](b: B)
defined class X
scala> new X(0)
res0: X[Nothing,Int] = X@3339ed6d // Nothing inferred, expected
scala> new X[Int,](0) // infer second type parameter, not working
Is there a
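The snippet is cut off, but a common workaround sketch for supplying only the first type parameter is a two-step builder, so that A is given explicitly while B is still inferred from the argument (MakeX and makeX are my names, not from the thread):

```scala
class X[A, B](val b: B)

// Split the type parameter list across two calls: fix A first,
// then let the apply call infer B from its argument.
class MakeX[A] {
  def apply[B](b: B): X[A, B] = new X[A, B](b)
}
def makeX[A]: MakeX[A] = new MakeX[A]

val x = makeX[String](0)  // A = String given explicitly, B = Int inferred
```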
I have the following code:
import scala.reflect.runtime.universe.{TypeTag, typeOf}
trait Root
class A extends Root
class B extends Root
class Monad[T
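The post is truncated mid-declaration, but a sketch of where this setup usually heads: a TypeTag context bound keeps the static type available at runtime, so T survives erasure (nameOf is my name for the helper):

```scala
import scala.reflect.runtime.universe.{TypeTag, typeOf}

trait Root
class A extends Root
class B extends Root

// The implicit TypeTag[T] carries the full static type to runtime:
def nameOf[T: TypeTag]: String = typeOf[T].typeSymbol.name.toString
```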
Hey Guys,
I have been able to successfully execute the new lda algorithm as well as
extract the topic/term inference with vectordump. What I was not able to do
was get the document/topic inference. When I run the same vectordump
command I get the same kinds of vectors (term:probability) as before.
Should the vectors not be (topic:probability)?
The command I run is:
vectordump -s temp/
Hello,
I stumbled upon an oddity in the type inferencer, and I'm wondering if this is a bug or a fundamental limitation. If I have a type variable U and an annotated type:
no type parameters for method of type (U => U)p.Seq[U] exist so that it can be applied to arguments (Int @ann => Int @ann)
--- because ---
argument expression's type is not compatible with formal parameter type;
found : Int @ann => Int @ann
required: Int
Hi,
I keep having trouble with type inference for anonymous functions. Here are some real examples:
def S[X, Y](p: P)(f: X => Y): S[X, Y] = ???
// fails with required: ? => ?
val userUpdateDisplayName = S(WritePrivilege) {
(v: ValueId, newDisplayName: String) =>
dao.updateUserDisplayName(v.getObjectName, newDisplayName)
}
// fails because it expected
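The comment is cut off, but a minimal sketch of the limitation being described (apply1 is my name): when the expected function type is still an unresolved type variable, scalac cannot infer the lambda's parameter types.

```scala
def apply1[X, Y](f: X => Y): X => Y = f

// val inc = apply1(v => v + 1)          // fails: missing parameter type
val inc  = apply1((v: Int) => v + 1)     // annotate the parameter, or...
val inc2 = apply1[Int, Int](v => v + 1)  // ...give the type arguments up front
```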
Hi All,
Sip-18 says "Postfix operators interact poorly with semicolon
inference." I would like to make sure I understand the problem fully.
I tried a few things, and it seems the problem is that if you have
a parameterless or no-arg method that you invoke in postfix notation,
followed by just a newline and another value, Scala complains that your
method doesn't take a parameter. For example this
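A sketch of the pitfall described above (my example): a trailing postfix call tries to swallow whatever follows as its argument, and a blank line (or parentheses) is what ends the expression.

```scala
import scala.language.postfixOps

val words = List("a", "b")
val s = words mkString
// If the next statement followed immediately on the next line, the parser
// would try to feed it to mkString as an argument and report an arity error.
// The blank line above is what terminates the postfix expression.

println(s)
```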
Hi all,
the following code does not compile (as of Scala 2.9.2):
object Test {
type Pred[A] = A => Boolean
def not[A](p: Pred[A]): Pred[A] = a => !p(a)
def isEven(i: Int): Boolean = i % 2 == 0
val isEvenVal: Pred[Int] = isEven _
// the compiler complains
// type mismatch
// found : Int => Boolean
// required: A => Boolean
def isOdd(i: Int
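The snippet is cut off, but a sketch of the workaround from that era: write the eta-expansion as an explicit lambda so the expected type Pred[Int] lines up without help from inference (PredDemo is my wrapper name):

```scala
object PredDemo {
  type Pred[A] = A => Boolean
  def not[A](p: Pred[A]): Pred[A] = a => !p(a)
  def isEven(i: Int): Boolean = i % 2 == 0

  // Explicit lambda instead of `isEven _`, sidestepping the reported mismatch:
  val isEvenVal: Pred[Int] = (i: Int) => isEven(i)
  val isOdd: Pred[Int] = not(isEvenVal)
}
```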
Sorry, I hate to bring up another "Why doesn't type inference work here" question, but I can't help it, because this seems so obvious:
scala> trait Foo[S] {
| def get: S
| }
defined trait Foo
scala> trait Bar[S] extends Foo[S] {
| abstract override def get: S = super.get
| }
defined trait Bar
scala> val foo = new Foo[String] with Bar
:9: error: trait Bar takes
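The truncated error is presumably "trait Bar takes type parameters": the type argument of a mixed-in trait is not inferred from an earlier parent, so Bar needs its own. A sketch of the fix (FooImpl is my name; abstract override also needs a concrete get earlier in the linearization):

```scala
trait Foo[S] { def get: S }
trait Bar[S] extends Foo[S] {
  abstract override def get: S = super.get
}

// A concrete get must come before Bar in the linearization,
// and Bar's type argument must be spelled out:
class FooImpl extends Foo[String] { def get = "hello" }
val foo = new FooImpl with Bar[String]
```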
Hey,
I have to ask, after all this time I've been facing it:
http://scastie.org/4523
Really, what's the reasoning behind this limitation?
It's very frustrating.
Thanks
Alois
Hello,
I am trying to implement the General Bayesian Inference machine learning
algorithm. The algorithm is really simple as an idea, but since I am new to
the Hadoop ecosystem, I lack some experience. Anyway, here is the idea:
On the one hand I have a stream of data, and on the
other I have a static table of data. Also, let's
suppose that the streaming data are related to a specific user
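The post is cut off, but as an illustration only (not the poster's Hadoop design), the core update such an algorithm applies per record is just Bayes' rule, P(H | D) = P(D | H) * P(H) / P(D):

```scala
// Minimal Bayes update: posterior from prior, likelihood, and evidence.
def posterior(prior: Double, likelihood: Double, evidence: Double): Double =
  likelihood * prior / evidence

val p = posterior(prior = 0.5, likelihood = 0.8, evidence = 0.65)
```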
Hey all,
I'm curious about that bug, you know, when you have two implicits of the same
type in scope and the compiler tells you no implicit can be found.
I think it's really bad, because the error message is wrong: it should warn
about ambiguity or abuse of shadowing... IDK, but the current situation is
really problematic.
It can sometimes take quite a while to find that, in fact, two impl. with the
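The message is cut off, but a minimal sketch of the situation being described (Show and Demo are my names): two implicits of the same type in scope make the lookup ambiguous, which older compilers could surface as a misleading "not found" error.

```scala
trait Show[A] { def show(a: A): String }

object Demo {
  implicit val s1: Show[Int] = new Show[Int] { def show(a: Int) = a.toString }
  implicit val s2: Show[Int] = new Show[Int] { def show(a: Int) = "#" + a }
  // implicitly[Show[Int]]  // error: ambiguous implicit values: s1 and s2
}
```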
Ok, so we all know that Scala's type inference has issues with higher kinded types. The workaround of course is to delay inference of types using implicit type constraints.
So, naively, I tried to implement a +: extractor for LinearSeqLike that would extract head :: tail portions of a collection. Here's my first cut:
object +: {
  def unapply[T, Coll <: LinearSeqLike[T, Coll]](t: Coll): Option[(T, Coll)] =
    if (t.isEmpty) None
    else Some((t.head.asInstanceOf[T], t.tail)) // TODO - Try to remove cast
}
moving to scala-internals
---------- Forwarded message ----------
From: "Josh Suereth"
Date: Jun 23, 2011 11:38 PM
Subject: Scala Type Inference Challenge (or Giving up on recursive types)
To: "scala-user"
Ok, so we all know that Scala's type inference has issues with higher kinded types. The workaround of course is to delay inference of types using implicit type constraints.
So, naively
Howdy,
I have a quick question about inferring topic model for a new document
using the learned LDA model (topic probability distribution).
If I have a set of documents and say I got the topic probability
distribution using Mahout LDA for each of these documents, then is it
possible to infer the topic probability distribution for a new document
using the learned model?
If this is not
Hi.
I've been trying for a while to figure out which technical direction would be best for building a business intelligence platform. I'm pretty sure that Semantic Web / RDF / OWL models are very capable of covering all my needs for the solution. But then I learned about graph databases and decided to take a look. Either way, I need strong inference features over inconsistent data
scala -version
Scala code runner version 2.9.1.final -- Copyright 2002-2011, LAMP/EPFL
1. scala> val a = List(1,2,3)
a: List[Int] = List(1, 2, 3)
2. scala> val b = List(4,5,6)
b: List[Int] = List(4, 5, 6)
3. scala> a zip b
res18: List[(Int, Int)] = List((1,4), (2,5), (3,6))
4. scala> a.zip(b)
res19: List[(Int, Int)] = List((1,4), (2,5), (3,6))
5. scala> a.zip(b) ++ List((9, 10))
res17